Agile Browser Based Testing with SpecFlow
The title may be a little misleading, as you may think that I am going to give some sort of Scrumish methodology for browser based testing with SpecFlow. Actually, this is more about how I implemented a feature to make browser based testing more realistic in a CI build.
Browser based testing is slow, real slow. So, if you want to integrate this type of testing into your CI build process, you need a way to make the tests run faster, or it may add considerable time to the feedback loop given to developers. My current solution is to only run tests for the current sprint. To do this I use a mixture of SpecFlow and my own home-grown test framework, TestPipe, to identify the tests I should run and ignore.
The solution I’m using at the moment centers on SpecFlow Tags. Actually, I have blogged about this before in my SpecFlow Tagging post. In this post I want to show a little more code that demonstrates how I accomplish it.
Common Scenario Setup
The first step is to use a common Scenario setup method. I add the setup as a static method to a class accessible by all Step classes.
public class CommonStep
{
    public static void SetupScenario()
    {
        try
        {
            Runner.SetupScenario();
        }
        catch (CharlesBryant.TestPipe.Exceptions.IgnoreException ex)
        {
            Assert.Ignore(ex.Message);
        }
    }
}
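To give an idea of how this common setup could be wired in, here is a minimal sketch that calls it from a SpecFlow hook instead of from each Step class; the ScenarioHooks class name is hypothetical and not part of TestPipe.

[Binding]
public class ScenarioHooks
{
    // Hypothetical hook class: runs the common setup once before every
    // scenario, instead of each Step class calling it explicitly.
    [BeforeScenario]
    public static void BeforeScenario()
    {
        CommonStep.SetupScenario();
    }
}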
TestPipe Runner Scenario Setup
The method calls the TestPipe Runner method SetupScenario, which handles the Tag processing. If SetupScenario determines that the scenario should be ignored, it throws the exception that is caught above. We handle the exception by marking the test as ignored with the test framework's ignore method (in this case NUnit's Assert.Ignore). We also pass the exception message to the ignore method, as there are a few reasons why a test may be ignored and we want the reason included in our reporting.
SetupScenario includes this bit of code.
if (IgnoreScenario(tags))
{
    throw new IgnoreException();
}
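IgnoreException is a custom TestPipe exception, constructed both without a message (here) and with one (in the selection logic later on). A minimal sketch of what such an exception might look like follows; the actual CharlesBryant.TestPipe.Exceptions.IgnoreException may differ.

public class IgnoreException : Exception
{
    // Sketch only; the real TestPipe exception may carry more detail.
    public IgnoreException()
    {
    }

    public IgnoreException(string message)
        : base(message)
    {
    }
}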
Configuring Tests to Run
This is similar to what I blogged about in the SpecFlow Tagging post, but I added a custom exception. Below are the interesting methods for the call stack walked by the SetupScenario method.
public static bool IgnoreScenario(string[] tags)
{
    if (tags == null)
    {
        return false;
    }

    string runTags = GetAppConfigValue("test.scenarios");
    runTags = runTags.Trim().ToLower();

    return Ignore(tags, runTags);
}
This method gets the tags that we want to run from configuration. For each sprint there is a code branch, and the app.config for the tests in that branch contains the tags for the tests we want to run on the CI build for the sprint. There is also a regression branch, which runs weekly, that runs all the tests. All of the feature files are kept together, so being able to tag specific scenarios in a feature file gives us the ability to run specific tests for a sprint while keeping all of the features together.
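For illustration, GetAppConfigValue could be little more than a wrapper over appSettings. The sketch below assumes a standard app.config entry such as <add key="test.scenarios" value="sprint-15" /> (the value is an invented example) and is not the actual TestPipe implementation.

private static string GetAppConfigValue(string key)
{
    // Sketch: read the value from the app.config appSettings section.
    // The real TestPipe helper may read configuration differently.
    return System.Configuration.ConfigurationManager.AppSettings[key] ?? string.Empty;
}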
Test Selection
Here is the selection logic.
public static bool Ignore(string[] tags, string runTags)
{
    if (string.IsNullOrWhiteSpace(runTags))
    {
        return false;
    }

    // If runTags has a value, the tag must match or the scenario is ignored.
    if (tags == null)
    {
        throw new IgnoreException("Ignored tags is null.");
    }

    if (tags.Contains("ignore", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Ignored");
    }

    if (tags.Contains("manual", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Manual");
    }

    if (runTags == "all" || runTags == "all,all")
    {
        return false;
    }

    if (tags.Contains(runTags, StringComparer.InvariantCultureIgnoreCase))
    {
        return false;
    }

    return true;
}
This is the meat of the solution, where most of the logic lives. As you can see, the exceptions contain messages for when a test is explicitly ignored with the Ignore or Manual tag. The Manual tag identifies features that are defined but can't be automated. This way we still have a formal definition that can guide our manual testing.
The variable runTags holds the value retrieved from configuration. If the config defines “all” or “all,all”, we run all the tests that aren’t explicitly ignored. The “all,all” value is a special case for ignoring tests at the Feature level, but this post is about Scenario level ignoring.
The final check compares the scenario’s tags to the runTags config. If the tags include the runTags value, we run the test. Any tests that don’t match are ignored. For scenarios this only works with one runTag. We might name the tag after the sprint, a sprint ticket, or whatever we like, but it has to be unique to the sprint. I like the idea of tagging with a ticket number as it gives traceability to tickets in the project management system.
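To make the selection rules concrete, here is a hypothetical example. The tag values are invented, and I am assuming the Ignore method hangs off the TestPipe Runner class like the other methods shown.

// Given test.scenarios is configured as "sprint-15":
string[] tags = { "sprint-15", "TICKET-123" };

Runner.Ignore(tags, "sprint-15"); // false: a tag matches runTags, so the test runs
Runner.Ignore(tags, "sprint-14"); // true: no match, so the test is ignored
Runner.Ignore(tags, "all");       // false: "all" runs everything not explicitly ignored

// Runner.Ignore(new[] { "manual" }, "sprint-15") throws IgnoreException("Manual"),
// which CommonStep.SetupScenario turns into Assert.Ignore("Manual").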
Improvements and Changes
I have contemplated using a feature file organization similar to SpecLog’s. They advocate a separate folder to hold feature files for the current sprint. Then, I believe (though I’m not sure), they tag the current sprint feature files so they can be identified and run in isolation. The problem with this is that the current sprint features have to be merged with the existing features after the sprint is complete.
Another question I have asked myself is whether I want to allow some kind of test selection through command line parameters. I am not really sure yet, so I will put that thought on hold until a need for command line configuration makes itself evident.
Lastly, another improvement would be to allow specifying multiple runTags. We would then have to iterate the run tags and compare each one, or come up with a performant way of doing so. Performance matters here because the comparison runs for every test, and for a large project there could be thousands of tests, each already carrying the inherent cost of having to run in a browser.
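Here is a sketch of one way that multi-tag comparison might look, splitting the configured value once and intersecting it with the scenario’s tags case-insensitively. This is a possible change, not current TestPipe behavior, and it would replace the single-tag Contains check shown earlier.

// Requires System.Linq. Supports a comma separated test.scenarios value,
// e.g. "sprint-15,TICKET-123", by intersecting it with the scenario's tags.
string[] runTagList = runTags.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

bool matchesAnyRunTag = tags.Intersect(
    runTagList.Select(t => t.Trim()),
    StringComparer.InvariantCultureIgnoreCase).Any();

return !matchesAnyRunTag;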
Conclusion
Well that’s it. Sprint testing can include browser based testing and still run significantly faster than running every test in a test suite.