Category: Quality

Page Object as Collection of Controls as Collection of Elements

I love the whole modular development ideal. Creating small, discrete, modular chunks of functionality instead of large monolithic globs of interdependent functionality helps to promote code reuse and increase the maintainability of a code base. For me, it was natural to extend this design principle to the Page Object Model in functional browser-based testing.

In ASP.NET Web Forms there is a style of web development that favors building the various parts of a page as User Controls. Your pages become a collection of User Controls that you can stitch together at runtime. So, your home page might have a main content control, a slide show control, and a secondary content control. Wrapping the home page could be a master page that has a masthead control, a navigation control, and a footer control. Each control contains the elements that make up a particular section of a web page. Any time you need new functionality, you build a new User Control and plug it into your application. When I build page objects for my tests, I follow the same concept.

When I model a web page as a page object I start with a page wrapper that provides a shell for the other objects contained in the page. I model the various user controls as control objects and add them to the page object that represents the page. This modularization also helps me quickly and easily compose new pages, as I don’t have to recreate common page parts.

The page wrapper just functions as a container control object and can provide communication between the controls and a central point to access page state. I try to keep the page wrapper light on functionality and focus on composition to provide the functionality that tests need.

I mentioned master pages, and I model master pages through inheritance instead of composition. If the web page I am modeling uses a master page, the page object for that page will inherit from another page object that models the master page. This is another way to cut down on duplication while increasing maintainability.

This pattern is probably common in the testing community, so I need to do more research on it. It is a work in progress for me, as I am still not solid on how to implement the composition. Should the child objects know about their containing page object? How should I manage the WebDriver across controls? What if a User Control is added to a page multiple times; how should I model that? I am trying different solutions to these problems, plus more common and edge cases that get me stuck in the mud from time to time. Hopefully, I can provide some tried and true strategies for this extension of the page object pattern as I exercise it over the next year.

For now, here is roughly where I am. I start with an interface to define the page object contract and a base page to define the core functionality all pages should have. From these abstractions, I build up an actual page model, as I described earlier, by composing control objects that in turn compose element objects.

Below is some early code for my page abstractions. I won’t go into the specifics of this code, but you can get the gist of where I am headed. One thing to note is that I have abstracted the concepts of Browser and Test Environment. This gives me flexibility in the use of various browser automation frameworks and the ability to easily configure tests for various test environments. I also have a base control object to model User Controls and an object that models page elements (think WebDriver element, but abstract so I can wrap any browser automation framework). One last note: PageKey is used in my reporting module. As test results are collected, I also store the page key with the results so that I have traceability and can be more expressive when analyzing the result data.

//This is not production code
public interface IPage
{
    string PageKey { get; }
    string PageUrl { get; }
    string PageVirtualUrl { get; }
    ITestEnvironment TestEnvironment { get; }
    string Title { get; }
    bool HasTitle();
    bool HasUrl();
    bool IsOpen();
    void Open();
}

public class BasePage : IPage
{
    public BasePage(string pageKey, Browser browser, ITestEnvironment environment)
    {
        this.PageKey = pageKey;
        this.Browser = browser;
        this.TestEnvironment = environment;
    }

    public Browser Browser { get; protected set; }

    public string PageKey { get; protected set; }

    public string PageUrl
    {
        get { return this.GetPageUrl(); }
    }

    public string PageVirtualUrl { get; protected set; }

    public ITestEnvironment TestEnvironment { get; protected set; }

    public string Title { get; protected set; }

    public virtual bool HasTitle()
    {
        return this.Title == this.Browser.Title;
    }

    public virtual bool HasUrl()
    {
        return !string.IsNullOrEmpty(this.PageUrl) && this.Browser.HasUrl(this.PageUrl);
    }

    public virtual bool IsOpen()
    {
        return this.HasUrl() && this.HasTitle();
    }

    public virtual void Open()
    {
        this.Browser.Open(this.PageUrl);
    }

    private string GetPageUrl()
    {
        if (this.TestEnvironment == null)
        {
            return string.Empty;
        }

        string baseUrl = this.TestEnvironment.BaseUrl;
        if (string.IsNullOrEmpty(baseUrl))
        {
            return string.Empty;
        }

        if (!baseUrl.EndsWith("/"))
        {
            baseUrl += "/";
        }

        // Guard against a null virtual URL before trimming the leading slash.
        string virtualUrl = this.PageVirtualUrl ?? string.Empty;
        if (virtualUrl.StartsWith("/"))
        {
            virtualUrl = virtualUrl.Substring(1);
        }

        return baseUrl + virtualUrl;
    }
}
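To make the composition concrete, here is a self-contained, hedged sketch of a control object plugged into a page object. IBrowser, IElement, StubBrowser, MastheadControl, and HomePage are hypothetical stand-ins for the Browser, element, and control abstractions described above, not the actual types from my framework.

```csharp
using System;

// Hypothetical stand-ins for the element and browser abstractions in the post.
public interface IElement
{
    string Text { get; }
}

public interface IBrowser
{
    IElement Find(string elementId);
}

// A control object groups the elements of one user control.
public class MastheadControl
{
    private readonly IBrowser browser;

    public MastheadControl(IBrowser browser)
    {
        this.browser = browser;
    }

    public IElement Logo
    {
        get { return this.browser.Find("masthead-logo"); }
    }
}

// The page object composes control objects instead of raw elements, so common
// page parts like the masthead can be reused across page objects.
public class HomePage
{
    public HomePage(IBrowser browser)
    {
        this.Masthead = new MastheadControl(browser);
    }

    public MastheadControl Masthead { get; private set; }
}

// A stub browser that just echoes the element id as its text. Because the
// controls only know about IBrowser, any automation framework can be wrapped.
public class StubBrowser : IBrowser
{
    private class StubElement : IElement
    {
        public string Text { get; set; }
    }

    public IElement Find(string elementId)
    {
        return new StubElement { Text = elementId };
    }
}
```

A test would then walk the composition: `new HomePage(browser).Masthead.Logo`, never touching WebDriver directly.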

Generate Test Service from WSDL

I had the genius idea to create a test service implementation to test our service clients. I would use the test service as a fake stand-in for the real services, as the real services are managed by third parties. Imagine wanting to test your client calls to the Twitter API without getting burned by the firewall when making a call to an external service.

As usual, it turns out that I am not a genius and my idea is not unique. There isn’t a lot of info on how to do it on the .NET stack, but I did find some discussions after a few Bings. I ended up using wsdl.exe to extract a service interface from the WSDL and implementing that interface with simple ASMX files. I won’t go into the details, but this is basically what I did:

    1. Get a copy of the WSDL and XSD from the actual service.
    2. Tell wsdl.exe where you stored these files and what you want it to do, which is generate a server interface:
       wsdl.exe yourFile.wsdl yourFile.xsd /l:CS /serverInterface
    3. Implement the interface as an ASMX service (you could use WCF, but I was in a hurry).
    4. Lastly, point your client to your fake service.

In your implementation, you can return whatever you are expecting in your tests. You can also capture the HTTP messages. Actually, the main reason I wanted to do this was to figure out whether my SOAP message was properly formatted, and I didn’t want to go through all of the trace listener diagnostics configuration or Fiddler port capture hassle. There may be easier ways to do this, and even best practices around testing service clients, but this was simple, fast, and easy, IMHO, and it actually opened up more testing opportunities for me as it’s all code and automatable (is that a word?).
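The fake implementation can be sketched like this. IOrderService is a hypothetical stand-in for the server interface wsdl.exe would generate; the real implementation would live in an .asmx code-behind deriving from System.Web.Services.WebService, which is omitted here to keep the sketch self-contained.

```csharp
using System;

// Hypothetical stand-in for the wsdl.exe-generated server interface.
public interface IOrderService
{
    string GetOrderStatus(string orderId);
}

public class FakeOrderService : IOrderService
{
    public FakeOrderService()
    {
        this.CannedStatus = "Shipped";
    }

    // Capture what the client sent so tests can assert on the outgoing message.
    public string LastOrderId { get; private set; }

    // Canned response configured by the test.
    public string CannedStatus { get; set; }

    public string GetOrderStatus(string orderId)
    {
        this.LastOrderId = orderId;
        return this.CannedStatus;
    }
}
```

With the client pointed at the fake endpoint, a test can both control the response and inspect what the client actually sent.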

So You Want To Automate Web Based Testing

I am on my 3rd enterprise-scale automated test project and figured that I should begin distilling some of the lessons and best practices I have learned along the way. I am by no means an expert in automated testing, but with my new position as Automation Engineer I plan on being one. In addition to my day job, I am in the process of building an automated functional testing framework, Test Pipe. It’s part of my push to improve my Code Quality Pipeline in my personal projects. So, with all of the focus on automated testing I have right now, it’s a good time to start formalizing the stuff in my head in a way that I can share it with the development community. Don’t expect much from this first post, and I am sorry if it gets incoherent. I am writing from the hip and may not have time for a heavy edit, but I have to post it because I said I would post on the 1st. As I get a better understanding of how to express my experience, hopefully I will have much better posts coming up.

Test Environment

First, I want to talk a little about the test environment. You should be able to build and deploy the application you want to test. This is more for Developers in Test or testers who actually write code. It is not necessary in every situation, but if you are responsible for proving that an application works as required, then you should be able to get the latest version of the app, build it, deploy it, configure it for testing, and test it. I am of the opinion that you should be able to do this manually before you get into the nuts and bolts of automating it. It is important to have a firm grasp of the DevOps Pipeline for your application if you are looking to validate the quality of the application’s development.

This isn’t feasible for every application, but having a local test environment is definitely worth the effort. Being able to check out code, then build and deploy it locally, gives you visibility into the application that is hard to achieve on a server. Also, a local environment affords you a more flexible workspace to experiment and break stuff without affecting others on your team.

You should also have a server-based test environment apart from the development and business QA environments. This could be physical or virtualized servers, but you should have a build server and the application and web servers necessary to deploy your application. Once you have your Quality Pipeline automated, you don’t want developers or QA messing around on the server invalidating your tests.

Test Configuration

Configuring the application for test is a big subject, and you should give a lot of thought to how best to manage the configuration. To configure the application for test you may want to create the application database, seed it with test data, update the configuration to change from using an actual SMTP server to a fake server that allows you to test email without needing the network…and more. Then there is configuring the type of test. You may want to run smoke tests every time developers check in code, a full functional test nightly, and a full regression before certifying the release as production ready. These tests can mean different things to different people, but the point is you will probably need to configure for multiple types of tests. On top of that, with web-based and mobile testing you have to configure for different browsers and operating systems. Like I said, configuration is a big subject that deserves a lot of thought and a strategy in order to build an effective Quality Pipeline.
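One way to model this is a small config object parsed from a single raw setting. This is a hedged sketch: TestRunConfig and the "TestType|Browser" setting format are hypothetical; in a real framework the raw value would come from an app.config appSetting or an environment variable.

```csharp
using System;

// Hypothetical model of a test-run configuration parsed from a raw setting.
public class TestRunConfig
{
    public string TestType { get; private set; }

    public string BrowserName { get; private set; }

    // Parses a raw setting like "Smoke|Firefox" into a config object,
    // defaulting to a full Functional run in the default browser.
    public static TestRunConfig Parse(string rawSetting)
    {
        var config = new TestRunConfig { TestType = "Functional", BrowserName = "Default" };
        if (string.IsNullOrEmpty(rawSetting))
        {
            return config;
        }

        string[] parts = rawSetting.Split('|');
        config.TestType = parts[0].Trim();
        if (parts.Length > 1)
        {
            config.BrowserName = parts[1].Trim();
        }

        return config;
    }
}
```

The point is less the format than having one place where the pipeline decides which kind of run (smoke, functional, regression) and which browser a given build executes.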

Test Code

I won’t talk about the details of test code here, but I will say a little about test code organization. Most of my test experience is in testing web applications and services on a Microsoft stack, but a well-structured test code base is necessary for any development stack. I generally start with 4 basic projects in my test solutions:

  • Core – this provides the core functionality and base abstractions for the test framework.
  • Pages – this provides an interactive model of the pages and controls under test.
  • Data – this provides functionality to manage test data and data access.
  • Specs – this provides the definition and implementation of tests and utilizes the other three projects to do its job.

In reality, there are more projects in a robust testing framework, but this is what I consider my base framework, and I allow the other projects to be born out of necessity. I may have a project to provide base and extended functionality for my browser automation framework to give the Pages project browser access. I may have a project to capture test design concepts to provide better organization and workflow to the Specs project. I will add even more projects as I gain an understanding of how to test the application, and my architecture will evolve as I find concepts that I foresee reusing or replacing. A typical start to a new project may look something like the layout below (this is somewhat similar to the TestPipe project I have underway to provide a ready-made solution for a base test framework).

  • Core
    • IPage – contract that all pages must implement.
    • BasePage – base functionality that all pages inherit from.
    • TestHelper – various helpers for things like workflow, logging, and access to wrappers around testing tools like SpecFlow.
    • Cache – an abstraction and wrapper around a caching framework.
    • In Test Pipe, I also abstract the browser and page elements (or controls) so I am not dependent on a specific browser automation framework.
  • Pages
    • Section – I generally organize this project similar to the website sitemap: a section is just a grouping of pages, and there is a folder for each section. It may make more sense for you to organize this by functional grouping, but following the sitemap has worked for me so far.
      • Page – page is a model of a page or control and there is one for each page or control that needs to be tested.
  • Specs
    • Features – I have a secret love affair with SpecFlow and this folder holds all of the SpecFlow Feature files for the features I am testing.
      • Under the Features folder I may or may not have Section folders similar to the Pages project, but as I stated this could be functional groupings if it makes more sense to you.
    • Steps
      • Organizing steps is still a work in progress for me, and I am still trying to find an optimal way to do it. Steps are implementations of feature tests, but they are sometimes abstract enough to be used for multiple features, so I may have a Sections folder as well as a Common or Global folder for shared steps, or other units of organization.
  • Data
    • DTO – these are just plain old objects that mirror tables in the database (à la Active Record).
    • TestData – this provides translation from DTOs to what’s needed in the test. The actual data needed in tests can be defined and configured in an XML file, spreadsheet, or database, and the code in this folder manages seeding the test data into the database and retrieving the data for test scenarios based on the needs defined in the test data configuration.
    • DataAccess – this is usually a micro-ORM: PetaPoco, Massive…my current flavor is NPoco. It is just used to move data between the database and the DTOs. We don’t need a full-blown ORM for our basic CRUD needs, but we don’t want to write a bunch of boilerplate code either, so micro-ORMs provide a good balance.
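The DTO and TestData pieces can be sketched like this. It is a hedged, self-contained example: UserDto and the in-memory store are hypothetical, and in a real framework the store would be a micro-ORM (NPoco, PetaPoco) talking to the seeded test database.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical DTO mirroring a Users table, active-record style.
public class UserDto
{
    public int Id { get; set; }

    public string UserName { get; set; }

    public bool IsActive { get; set; }
}

// Translates from DTOs to what scenarios need; the in-memory list stands in
// for rows a micro-ORM would fetch from the test database.
public class TestData
{
    private readonly List<UserDto> store;

    public TestData(IEnumerable<UserDto> seededRows)
    {
        this.store = seededRows.ToList();
    }

    // A scenario asks for "an active user", not for raw rows or SQL.
    public UserDto GetActiveUser()
    {
        return this.store.FirstOrDefault(u => u.IsActive);
    }
}
```

Keeping this translation layer between the tests and the data access means scenarios express intent ("an active user") while the seeding and retrieval details stay swappable.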

Automated Test Framework

I use a unit test framework as my Automated Test Framework. The unit test framework is used as the test runner for the test scenario steps and uses a browser automation framework like Selenium or WatiN (the .NET version of Watir) to drive the browser. The browser automation is hidden behind the page objects, which we will discuss later. The main takeaway is that unit test frameworks are not just for unit tests, which is why I call them automated test frameworks.

I use both NUnit and MSTest as my unit test framework. To be honest, I haven’t run into a situation where MSTest was a terrible choice, but I have heard many arguments from the purists out there on why I should use something else if I want to achieve test enlightenment. I use NUnit because I use it at work and there is a strong community around it, but 9 times out of 10 I will use MSTest as my framework if there is no precedent for using another one.

Another benefit of MSTest is that it doesn’t need any setup, as it comes out of the box with Visual Studio. The integration with Visual Studio and the MS stack is great when you want something quick to build a test framework around. It just works, and it’s one less piece of the puzzle I have to think about.

If you have a requirement that points more in the direction of another unit test framework, as long as it can run your browser automation framework, use it. Like I said, I also use NUnit, and actually, it’s not that difficult to set up. To set up NUnit I use NuGet. I generally install the NuGet package along with the downloadable MSI. The NuGet package only includes the framework, so I use the MSI to get the GUI set up. With Visual Studio 2013 you also get some integration, in that you can run and view results in the VS IDE, so you really don’t need the GUI if you run 2013 or install an integrated runner in lower versions of VS.

In the end, it doesn’t matter what you use for your test runner, but you should become proficient at one of them.

Browser Automation

I use Selenium WebDriver as my browser automation framework, but you could use anything that works for your environment and situation. WatiN is a very good framework, and there are probably more. WebDriver is just killing it in the browser automation space right now. In the end, you just need a framework that will allow you to drive a browser from your automated test framework. You could roll your own crude driver that makes HTTP requests if you wanted to, but why would you? Actually, there is a reason, but let’s not go there.

Again, I am a .NET developer, and I use NuGet to install Selenium WebDriver. Actually, there are two packages that I install: Selenium WebDriver and Selenium WebDriver Support Classes. The Support Classes provide helpers for HTML select elements, waiting for conditions, and page object creation.
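Here is a hedged sketch of those Support Classes in use. The element IDs are hypothetical, and this assumes the Selenium WebDriver and Support NuGet packages are referenced; only the config-parsing helper at the bottom runs without a live browser.

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class BrowserHelper
{
    // Select an option in an HTML <select> by its visible text, using the
    // SelectElement helper from the Support Classes.
    public static void SelectState(IWebDriver driver, string stateName)
    {
        var select = new SelectElement(driver.FindElement(By.Id("state")));
        select.SelectByText(stateName);
    }

    // Wait for an element to exist before the test interacts with it.
    public static IWebElement WaitFor(IWebDriver driver, By locator, TimeSpan timeout)
    {
        var wait = new WebDriverWait(driver, timeout);
        return wait.Until(d => d.FindElement(locator));
    }

    // Reads an explicit-wait timeout from a raw config value, with a default,
    // so wait times can vary per test environment.
    public static TimeSpan TimeoutFromSetting(string rawSeconds)
    {
        int seconds;
        if (!int.TryParse(rawSeconds, out seconds) || seconds <= 0)
        {
            seconds = 10; // default explicit wait
        }

        return TimeSpan.FromSeconds(seconds);
    }
}
```

Wrapping these calls in helpers like this is also the first step toward the browser abstraction I mentioned earlier, so the page objects never touch WebDriver types directly.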

Test Design

As you build tests you will discover that patterns emerge, and some people go as far as building DSLs (domain-specific languages) around the patterns to help them define tests in a more structured and simplified manner. I use SpecFlow to provide a more structured means of defining tests. It is an implementation of Cucumber that brings Gherkin and BDD to .NET development. I use it to generate the unit test code and stubs of the actual steps that the automated test framework calls to run tests.

In the step stubs that SpecFlow generates, I call page objects that call the browser automation framework to drive the browser as a user would for the various test scenarios. Inside the steps you will undoubtedly find patterns that you can abstract, and you may even go as far as creating a DSL, but you should definitely formalize how you will design your tests.
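A filled-in step class might look like the hedged sketch below. The SpecFlow [Binding] and step attributes are real, but LoginPage is a hypothetical stand-in page object, stubbed out here so the sketch is self-contained; a real one would inherit BasePage and drive the browser.

```csharp
using System;
using TechTalk.SpecFlow;

// Minimal stand-in page object; a real one would drive WebDriver.
public class LoginPage
{
    private string currentUser;

    public void Open()
    {
        // A real page object would call browser.Open(this.PageUrl) here.
    }

    public void LogIn(string userName, string password)
    {
        this.currentUser = userName;
    }

    public bool IsLoggedIn()
    {
        return !string.IsNullOrEmpty(this.currentUser);
    }
}

[Binding]
public class LoginSteps
{
    private LoginPage loginPage;

    [Given(@"I am on the login page")]
    public void GivenIAmOnTheLoginPage()
    {
        this.loginPage = new LoginPage();
        this.loginPage.Open();
    }

    [When(@"I log in as ""(.*)""")]
    public void WhenILogInAs(string userName)
    {
        this.loginPage.LogIn(userName, "password");
    }

    [Then(@"I should see the home page")]
    public void ThenIShouldSeeTheHomePage()
    {
        // A real step would use the unit test framework's Assert here.
        if (!this.loginPage.IsLoggedIn())
        {
            throw new InvalidOperationException("Login failed");
        }
    }
}
```

Note the step never touches WebDriver; everything goes through the page object, which is what keeps the steps reusable across features.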

More .NET-specific info: I use NuGet to install SpecFlow also. Are you seeing a pattern here? If you are a .NET developer and you aren’t using NuGet, you are missing something special. In addition to the NuGet package, you will need to download the SpecFlow Visual Studio integration so that you get the other goodies like item templates, context menus, step generation, etc.

Build Server

I use both Team Foundation Server Build and Cruise Control. Again, the choice of platform doesn’t matter; this is just another abstraction in your Quality Pipeline that provides a service. Here the focus is on being able to check out code from the source code repository and build it, all automatically without your intervention. The build server is the director of the entire process. After it builds the code it can deploy the application, kick off the tests, and collect and report test results.

I use NAnt with Cruise Control to automate deployment and configuration. TFS can do the same. Both can manage running tests, run static code analysis, and report on the quality of the build.

Conclusion

Well, that’s the lowdown on the parts of my general test framework. Hopefully, I can keep it going and give some insight into the nitty-gritty details of each piece.

Release the Code Quality Hounds

I wanted to start using more quality tools as part of my initiative to improve my Code Quality Pipeline. So I decided to implement Code Analysis to ensure the project maintains certain coding standards. In addition to the FxCop-style static analysis of the MSIL code by Visual Studio Code Analysis, I decided to also use StyleCop to validate that my source code is up to standard. This would be a major undertaking in most legacy applications, but if you have been following Microsoft coding standards or you’re working on a greenfield app, I would definitely recommend implementing these code quality cop bots.

Code Analysis

When you enable Code Analysis for managed code, it will analyze your managed assemblies and report information about them, such as violations of the programming and design rules set forth in the Microsoft .NET Framework Design Guidelines.

To get Code Analysis up and running, I right-clicked on my project, clicked Properties, and then clicked on the Code Analysis tab.


This screen allows us to configure code analysis. The first thing I wanted to do was create a custom rule set, which lets me configure and save the rules I want for code analysis. So, I selected Microsoft All Rules, clicked Open, then File > Save As, and saved the rule set file in my solution root folder. Then I edited the file in Notepad++ to give it a better name, went back to VS, and selected my custom rule set.

To enable Code Analysis on build, I checked the Enable Code Analysis on Build box. You may want to only do this on your Release target, or another target that you run before certifying a build production ready. I did this on every project in the solution, including tests, as I didn’t want to get lazy with the quality of my tests. Having Code Analysis enabled will also cause it to run on my Team Build gated check-in build, as that build is set to run Code Analysis as configured on the projects.

Also, I wanted Code Analysis violations treated as errors on build, so I added this to the Debug PropertyGroup:

 <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>

Lastly, as a quick test, I right-clicked my solution in Solution Explorer and clicked “Run Code Analysis on Solution.” Since I had some code already written, it returned some issues, which I fixed. I then checked the code in to TFS, and the code analysis also ran on Team Build.

StyleCop

StyleCop analyzes C# source code to enforce a set of style and consistency rules. First I downloaded and installed StyleCop.

For some reason, getting StyleCop up and running wasn’t as easy as it usually is, so I am going to explain an alternate route, but bear in mind that the best route is NuGet.

Actually, I installed StyleCop from NuGet, but it didn’t configure the tooling in Visual Studio properly, so I downloaded the package from the project site, http://stylecop.codeplex.com, and reinstalled from the downloaded installer. I tried adding the NuGet package to add the StyleCop MSBuild target, but that too resulted in too many issues. So I followed the instructions in the StyleCop docs to get a custom build target installed for my projects: http://stylecop.codeplex.com/wikipage?title=Setting%20Up%20StyleCop%20MSBuild%20Integration&referringTitle=Documentation.

I decided to show StyleCop violations as errors. So, I added

<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

to the project files. This instructs the project to treat StyleCop violations as a build error.

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProjectGuid>{077D5FD7-F665-4CB1-947E-8E89D47E2689}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>CharlesBryant.AppNgen.Core</RootNamespace>
    <AssemblyName>AppNgen.Core</AssemblyName>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <SccProjectName>SAK</SccProjectName>
    <SccLocalPath>SAK</SccLocalPath>
    <SccAuxPath>SAK</SccAuxPath>
    <SccProvider>SAK</SccProvider>
    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>
    <StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>
  </PropertyGroup>

I wanted StyleCop integrated into my local and Team Builds, so I added the build target to the project files. The build target is installed when you install StyleCop, if you select it. You will have to copy the target file from the Program Files\MSBuild folder to a folder within your solution folder so it can be checked into TFS. Then you can point to it in your project files to make StyleCop run on build by adding

<StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>

to your PropertyGroup and

<Import Project="$(StyleCopTargets)" />

If I were you, I would stick with the StyleCop.MSBuild NuGet package to integrate StyleCop into your build. The route I took and explained above required a lot of debugging and reworking that I don’t want to blog about. I actually tried the NuGet package in another project and it worked perfectly.

Conclusion

In the end, I have Code Analysis and StyleCop running on every build, locally or remote, and violations are treated as errors so builds are prevented when my code isn’t up to standard. What’s in your Code Quality Pipeline?

Visual Studio Architecture Validation

As part of my Code Quality Pipeline, I want to validate my code against my architectural design. This means I don’t want invalid code integrations, like client code calling directly into data access code. With Visual Studio 2012 this is no problem. First, I had to create a modeling project. Then I captured my architecture as a layer diagram. I won’t go over the details of how to do this, but you can find resources here:

http://www.dotnetcurry.com/ShowArticle.aspx?ID=848
http://msdn.microsoft.com/en-us/library/57b85fsc(v=vs.110).aspx

Next I added <ValidateArchitecture>true</ValidateArchitecture> to my modeling project’s .modelproj file. This instructs MSBuild to validate the architecture on each build. Since this is configured at the project level, it will validate the architecture against all of the layer diagrams included in the project.

For a simpler way to add the configuration setting, here is an MSDN walkthrough – http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto

1. In Solution Explorer, right-click the modeling project that contains the layer diagram or diagrams, and then click Properties.
2. In the Properties window, set the modeling project’s Validate Architecture property to True.
This includes the modeling project in the validation process.
3. In Solution Explorer, click the layer diagram (.layerdiagram) file that you want to use for validation.
4. In the Properties window, make sure that the diagram’s Build Action property is set to Validate.
This includes the layer diagram in the validation process.

Adding this configuration to the project file only validates my local build. As part of my Quality Pipeline, I also want to validate on Team Build (my continuous build server). There was some guidance out there on the web and blogosphere, but for some reason my options did not match what they were doing. You can try the solution on MSDN (http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto). Like I said, this didn’t work for me. I had to right-click the build definition in Build Explorer and click Edit Build Definition. On the Process tab, under Advanced, I added /p:ValidateArchitecture=true to MSBuild Arguments.

Now my code is guarded against many of the issues that result from implementations that violate the designed architecture.

SpecFlow Tagging

It seems like SpecFlow tagging has been a theme in many of my recent posts. It’s a very powerful concept and central to how I control test execution. Hopefully, this will give someone some inspiration. When I learned about tagging, everything else seemed to click in regards to my understanding of the SpecFlow framework. Tagging was a bridge for me.

A tag in SpecFlow is a way to mark features and scenarios. A tag is an at sign, @, followed by the text of the tag. If you tag a feature, it will apply to all of the feature’s scenarios, and tagging a scenario will apply to all of the scenario’s steps.

Out of the box, SpecFlow implements the @ignore tag. This tag will generate an ignore attribute in your unit test framework. Although this is the only out-of-the-box tag, you can create as many tags as you like, and there are a lot of cool things you can do with them.

SpecFlow uses tags to generate categories or attributes that can group and control test execution in the unit test framework that is driving your tests. Tags are also used to control test execution outside the unit test framework in SpecFlow’s event bindings and scoped bindings. You also have access to tags in your test code through the ScenarioContext.Current.ScenarioInfo.Tags property.

Another benefit of tagging is that the tags can be targeted from the command line. I can run my tests from the command line and indicate what tags I want to run. This means I can control testing on my continuous integration server.

As you can see, tags are very powerful indeed in shaping the execution of your tests. Below I will explain how I am currently using tags in a standardized way in my test framework. My tagging scheme is still a work in progress, and I am homing in on the proper balance of tags that will allow good control of the tests without creating accidental complexity and a maintenance nightmare.

In the feature file for UI tests I have the following tags:

  • Test Type – in our environment we run Unit, Integration, Smoke, and Functional tests, in order of size and speed. A feature definition should only include one test type tag. There can be situations where a Functional test includes lower-level tests, but no other test types should be mixed, so you could have @Smoke and @Functional, but not @Smoke and @Unit.
  • Namespace – each C# namespace segment is a tag. For example, if I have a namespace of Foo.Bar then my tags would be @Foo @Bar
  • SUT – the system under test is the class or name of the page or control
  • Ticket Number – the ticket number that the feature was created or changed on (e.g. @OFT11294). We prefix the number to better identify this tag as a ticket.
  • Requirement – this is a reference to the feature section in the business requirements document that the feature spec is based on (e.g. @Rq4.4.1.5). We prefix the number to better identify this tag as a requirement.

With the above examples our feature definition would look like this:

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome Thing-a-ma-gig

This allows me to target the running of tests for different namespaces, test types, SUTs, ticket numbers, requirements, or any combination of them. When a developer deploys changes for a ticket, I can run just the tests that target that ticket. This is huge in decreasing the feedback cycle for tests. Instead of having to run all the tests, which could take hours, we can run a subset and get a quicker response on the outcome of the tests.

At the scenario level we want to tag the system under test (SUT). This allows us to run tests for a particular SUT, but it also gives us the flexibility of hooking behavior into our test runs. Say I want to instantiate a specific class for each scenario. If I used a BeforeScenario hook with no tagging, it would apply to every scenario in the test assembly, because SpecFlow hooks are global. With tagging, it only runs for scenarios with matching tags.

…Feature File

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome Thing-a-ma-gig

@Thingamagig @Rq4.4.1.5
Scenario: Awesome Thing-a-ma-gig works

…Step Class

[BeforeScenario("Thingamagig")]
public static void ScenarioSetup()
{
    sut = new Thingamagig();
}

We have the @Ignore tag that we can apply to features and scenarios to signal to the test runner not to run the tagged item. There is also a @Manual tag that functions like the @Ignore tag for features and scenarios that have to be run manually. I did some custom logic to filter the @Manual tag, but you can find a simple way to do it in this short post on SpecFlow Manual Testing.

In my test framework I have fine-grained control of test execution through a little helper class I created. I won't bore you with all of the code, but basically I use a scoped BeforeFeature binding to call a bit of code that decides whether the feature or scenario should be run or not. Yes, this kind of duplicates what SpecFlow and the unit test framework already do, but I am a control freak. This code is dependent on SpecFlow and NUnit.Framework.

if (IgnoreFeature(FeatureContext.Current.FeatureInfo.Tags))
{
    Assert.Ignore();
    return;
}

The IgnoreFeature() method gets the tags to run or ignore from a configuration file. If a tag in FeatureContext.Current.FeatureInfo.Tags matches an ignore tag from configuration, it returns true; if the tag matches a run tag, it returns false. We also match on @Ignore and @Manual, even though SpecFlow already supports @Ignore. The same concept applies to scenarios, whose ScenarioContext.Current.ScenarioInfo.Tags are evaluated in a global BeforeScenario binding. In the example above I am using Assert.Ignore() to skip the test. As you probably know, Ignore in unit test frameworks is usually just an exception that is thrown to immediately stop the test and mark it as ignored. In my actual test framework, I replace Assert.Ignore() with a custom exception of my own, which allows the ignored tags to be logged.
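The decision logic behind a helper like this can be sketched as a self-contained class. The names here are hypothetical, and where the real framework reads run/ignore tags from configuration, this sketch takes them as parameters to stay runnable on its own:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch of tag-based run/ignore filtering. In a real test
// framework, runTags and ignoreTags would come from a configuration file.
public static class TagFilter
{
    // Returns true when the feature (or scenario) should be skipped.
    public static bool ShouldIgnore(
        IEnumerable<string> itemTags,
        ICollection<string> runTags,
        ICollection<string> ignoreTags)
    {
        var tags = itemTags.ToList();

        // @Ignore and @Manual always short-circuit to "skip".
        if (tags.Contains("Ignore") || tags.Contains("Manual")) return true;

        // An explicit ignore tag wins over everything else.
        if (tags.Any(ignoreTags.Contains)) return true;

        // If run tags are configured, only matching items run.
        if (runTags.Count > 0) return !tags.Any(runTags.Contains);

        return false;
    }
}
```

Inside a BeforeFeature binding, the call would look something like `if (TagFilter.ShouldIgnore(FeatureContext.Current.FeatureInfo.Tags, runTags, ignoreTags)) Assert.Ignore();`.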

With this method of tag-based ignoring driven by a configuration file, we could add an environment tag to control test execution by environment. I mention this because I have seen many questions about controlling tests by environment. The point is, there are many ways to use tags, and the way I use them is just one. Tag how you want; there are some great posts out there to give you inspiration for your situation.

Pros and Cons of my SpecFlow Tagging Implementation

Pros:

  • Fine-grained control of test execution.
  • Controlling tests through configuration and command-line execution.
  • Traceability of tests to business requirements and work tickets.

Cons:

  • Tags have no IntelliSense in Visual Studio.
  • Tags are static strings, so spelling is an issue.
  • When a namespace or SUT name changes, we have to remember to change the name in the feature and step files.
  • Tags can get messy and complicated, especially when a test covers multiple tickets or features.

First Post as an Automation Engineer

As you probably don’t know, I have been given the new title of Automation Engineer. I haven’t really been doing much automation besides a short demo project I gave a brief presentation on. When I got the green light to shed some of my production development duties (I am still an active Production Developer) and concentrate on automation, I decided to start with an analysis of the current automation framework.

My first task was to review the code of our junior developer (now a very competent developer “sans junior”). He was hired as part of our grad program and was tasked with automating UI tests for our public facing websites. We didn’t have any real UI automation before he started working on it so there was no framework to draw from and he was basically shooting from the hip. He has been guided by our dev manager and has received some input from the team, but he was basically given a goal and turned loose.

He actually did a great job, but having no experience in automation, there were bound to be issues. This would hold true even for the most seasoned developer. This post is inspired by a code review of his code. First, let’s set some context. We are using Selenium WebDriver, SpecFlow, NUnit, and the Page Object Model pattern. I can’t really show any code as it’s private, but as you will see from my brain dump below, it got me thinking about some interesting concepts (IMHO).

Keep Features and Scenarios Simple

I am no expert on automated testing, yet, but in my experience with UI testing and building my personal UI test framework project, features and scenarios should be as simple as possible, especially when they need to be reviewed and maintained by developers and non-techies. To prevent their eyes from glazing over at the sight of hundreds of lines of steps, simplify your steps and focus your features. Always start as simply as possible to capture the essence of the feature or scenario. Then, if there is a need for more complexity, negotiate changes and more detail with the stakeholders.

Focus on Business Functionality, Not Bug Hunting

Our grad used truth tables to draw out permutations of test data and scenarios. He wrote a program that generates all of the feature scenarios, plus a generic test fixture that could run them. The problem is that this results in thousands of lines of feature specifications that no one is going to read, and maintaining them would be a nightmare. There is no opportunity to elicit input from stakeholders on the tests, and the value associated with that collaboration is lost. Don’t get me wrong, I like what he did and it was an ingenious solution, but I believe the time he spent on it could have been better spent producing tests that could be discussed with humans. I believe he was focused on catching bugs when he should have focused more on proving the system works. His tool was more for exploratory testing, when what we need right now is functional testing.

Improve Cross Team Collaboration

It is important for us to find ways to better collaborate with QA, the business, IT, etc. BDD style tests are an excellent vehicle to drive collaboration as they are easy to read and understand and they are even executable by tools like SpecFlow. Additionally, they provide a road map for work that devs need to accomplish and they provide an official definition of done.

Focus, Test One Thing

It is important to separate test context properly. Try to test one thing. If you have a test to assert that a customer record can be viewed don’t also assert that the customer record can be edited as these are two separate tests.

In UI tests, your Given should set the user up at the first relevant point of the workflow. If you are testing an action on the Edit Vendor page, you should set the user up so they are already on the Edit Vendor page. Don’t have steps to go from login through the View Vendor page and eventually to the Edit Vendor page, as that navigation is covered by the View Vendor page’s navigation test. Similarly, if I am doing a View Vendor test I would start on the View Vendor page, and if I wanted to verify my vendor edit links work, I would click one and assert I am on the Vendor Edit page, without any further testing of Vendor Edit page functionality. One assert per test, the same rule as unit tests.

Limit Dependencies

It may be simpler to take advantage of the FeatureContext.Current and ScenarioContext.Current dictionaries to manage context-specific state instead of static members. The statics are good in that they are strongly typed, but they clutter the tests and make it harder to refactor methods into new classes, as we have to carry the static dependency to the new class when the FeatureContext and ScenarioContext dependencies are already available in all step bindings.
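For a sense of the pattern: SpecFlow's contexts are dictionaries of objects keyed by string, with typed Set/Get helpers. The self-contained stand-in below illustrates that shape (it is not the SpecFlow type; the class and key names are made up):

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for the dictionary-backed, typed-state pattern
// that SpecFlow's ScenarioContext.Current exposes. Values are stored as
// object and cast back to their concrete type on retrieval.
public class ContextDictionary
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public void Set<T>(T value, string key) => _items[key] = value;

    public T Get<T>(string key) => (T)_items[key];

    public bool ContainsKey(string key) => _items.ContainsKey(key);
}
```

In a step binding the equivalent calls would be ScenarioContext.Current storing a value under a key in one step and retrieving it in another, so a step moved to a new class needs no static field, only the context that every binding already has.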

Test Pages and Controls in Isolation

Should we define and test features as a complete workflow or as discrete pieces of a configurable workflow? In ASP.NET Web Forms, a page or control has distinct entry and exit points. We enter through a page request and exit through some form of redirect/transfer initiated by a user gesture and sometimes expressed in an event. In terms of the logic driving the page on the server, the page is not necessarily aware of its entry source and may not have knowledge of its exit destination. Sometimes Session state is carried across pages or controls, and it can be modified and used in a way that impacts some workflows. Even in this situation we could set up Session state to a known value before we act on and assert our scenario. So, we should be able to test pages/controls in isolation without regard to the overall workflow.

This is not to say that we should not test features end to end, but we should be able to test features at a page/control level in isolation. The same way we test individual logic units in isolation in a Unit Test. I would think it would be extremely difficult to test every validation scenario and state transition across an entire workflow, but we can cover more permutations in an isolated test because we only have to account for the permutations in one page/control instead of every page/control in a workflow.

Scope Features and Scenarios

I like how he is namespacing feature files with tags.

@CustomerSite @Vendor @EditVendor
Feature: Edit Vendor…

@EditVendor
Scenario:…

In this example there is a website called Customer Site with a page called Vendor and a feature named Edit Vendor. I am not sure if it is necessary to extend the namespace to the scenarios. I think this may be redundant, as Edit Vendor covers the entire feature and every scenario included in it. Granted, he does have a mix of contexts in the feature file (e.g. Edit Vendor and Create Vendor) and he tags the scenarios based on the context of each scenario. As I think about it more, it may be best to extend the entire namespace to the scenario level after all, as it gives fine-grained control of test execution: we can instruct the test runner to only run certain tags. (Actually, I did a post on scoping with tags.)

Don’t Duplicate Tests

Should we test the operation of common grid functionality in a feature that isn’t specifically about the grid? I mean, if we are testing View Customers, is it important to test that the customer grid can sort and page? Should that be a separate test, to remove complexity from the View Customers test? Should we also have a test specifically for the Grid Control?

In the end, he did an awesome job and laid a good solid foundation to move our testing framework forward.

Common SpecFlow Steps

To share steps across features, you have two options that I know of so far. You can create a class that inherits from SpecFlow.Steps, give it a [Binding] attribute, and code steps as normal, and they will be usable in all features in the same assembly. This is inherent in the global nature of steps in SpecFlow, and you don’t have to do anything to get this behavior.

When you need to share steps across assemblies it gets a little hairy. Say you want a common set of steps that you use for every SpecFlow feature that you write in your entire development life. To do this you will need a bit of configuration. You would create your Common SpecFlow Steps project, create a step class like you did above, reference this project in the test project you want to use it in, then add this little number to the <specflow> section in your configuration file:

<stepAssemblies>
  <stepAssembly assembly="ExternalStepsAssembly" />
</stepAssemblies>

Just plug in the name of your Common SpecFlow Steps project and you are in business.
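For context, on SpecFlow versions configured through App.config, the <stepAssemblies> element lives inside the <specFlow> section, which itself must be registered in <configSections>. A minimal App.config might look roughly like this (the assembly name is a placeholder):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <!-- Registers the SpecFlow configuration section handler. -->
    <section name="specFlow"
             type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
  </configSections>
  <specFlow>
    <stepAssemblies>
      <!-- Placeholder: the name of your shared steps assembly. -->
      <stepAssembly assembly="ExternalStepsAssembly" />
    </stepAssemblies>
  </specFlow>
</configuration>
```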

GUI Automation

I am building a browser automation framework with Selenium WebDriver, but there are things in the native browsers and the OS that I need to do that Selenium can’t help me with. Since most of my testing is in Windows I could just PowerShell it up, but I was wondering if there was something simpler. Well, I found these projects that I’d like to check out later:

http://www.sikuli.org/ – Sikuli automates anything you see on the screen. It uses image recognition to identify and control GUI components. It is useful when there is no easy access to a GUI’s internal or source code.

http://www.autoitscript.com/ – AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movement and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages (e.g. VBScript and SendKeys). AutoIt is also very small, self-contained and will run on all versions of Windows out-of-the-box with no annoying “runtimes” required!

I am not sure if these can provide the solution I am looking for. There are many more out there, but I already suffer from information overload so one day I will get to work evaluating these.

Selenium WebDriver File Download Testing

Not easy!

When you click a download button with the standard browser configuration, you are asked for a location to download the file. Let’s explore the Google Encyclopedia of copy-and-paste code to see how we can solve this.

The main solution seems to be setting the browser profile to automatically download without asking for a location. Below I have a little info on what I found out about Firefox, Chrome and IE. I didn’t do a deep dive. Right now I am spiking a solution to see if this will need to be a manual test.

Firefox

From our friends at StackOverflow, this post advocates manually setting the Firefox profile and then switching to that profile in Selenium (http://stackoverflow.com/questions/14645877/how-to-test-downloading-files-in-selenium2-using-java-and-then-check-the-downloa)

In Firefox you can do the same thing:

  1. Create a Firefox profile.
  2. Change the Firefox download settings so it saves files without asking for a location.
  3. Launch the automation using that profile:

FirefoxProfile profile = new FirefoxProfile(profileDir);
driver = new FirefoxDriver(profile);

Another little gem from the same site says I can do one better by configuring the profile in code (http://stackoverflow.com/questions/16746707/how-to-download-any-file-and-save-it-to-the-desired-location-using-selenium-webd):

firefoxProfile.setPreference("browser.helperApps.neverAsk.saveToDisk", "text/csv");

And this post brings it home and puts it all together (http://stackoverflow.com/questions/1176348/access-to-file-download-dialog-in-firefox)

FirefoxProfile firefoxProfile = new FirefoxProfile();
firefoxProfile.setPreference("browser.download.folderList", 2);
firefoxProfile.setPreference("browser.download.manager.showWhenStarting", false);
firefoxProfile.setPreference("browser.download.dir", "c:\\downloads");
firefoxProfile.setPreference("browser.helperApps.neverAsk.saveToDisk", "text/csv");

WebDriver driver = new FirefoxDriver(firefoxProfile); // new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);
driver.navigate().to("http://www.myfile.com/hey.csv");

Chrome

There are instructions on achieving the same results as Firefox, but you have to jump through a few minor hoops to get there (http://stackoverflow.com/questions/15824996/how-to-set-chrome-preferences-using-selenium-webdriver-net-binding).

IE

There seem to be ways to do this in WatiN in browsers below IE9, but for current IE browsers it’s just plain ugly (http://stackoverflow.com/questions/7500339/how-to-test-file-download-with-watin-ie9/8532222#8532222). I guess I could use one of the GUI automation tools (see the GUI Automation post), but is it really worth all that?

I haven’t tried these at home, so I am naively assuming that they work. Anyway, after you have successfully downloaded the file, you can use C# System.IO to inspect the file format, size, file name, content…you get the picture.
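That System.IO inspection can be sketched like this. The helper name, expected header, and size threshold are made up for illustration; in a real test the path would match the browser.download.dir configured in the profile:

```csharp
using System;
using System.IO;

// Sketch of post-download verification with System.IO.
// Checks that the file exists, is at least minBytes long, and that its
// first line is the expected CSV header row.
public static class DownloadChecker
{
    public static bool LooksLikeCsvDownload(string path, string expectedHeader, long minBytes)
    {
        var info = new FileInfo(path);
        if (!info.Exists || info.Length < minBytes) return false;

        // Inspect content: the first line of a CSV export should be the header row.
        using (var reader = new StreamReader(path))
        {
            return reader.ReadLine() == expectedHeader;
        }
    }
}
```

One caveat: browser downloads are asynchronous, so a test would typically poll for the file (with a timeout) before calling a check like this.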