Stay “So Fresh, So Clean” in Legacy Application Development

The title may not fit until the end of this post, but bear with me. I work in a large legacy .NET application, and I felt it pulling me into the depths of its ASP.NET 1.1-ness, so I decided to write this post to remind myself that I can keep my skills moving forward and still keep the legacy maintained.

I had a new project where I had to dump the configuration data for customers into a file in a human-readable format. The purpose of this project is to allow someone to compare the configuration data from environment to environment to verify the configuration is maintained and correct as it makes its way to production. The comparison is done outside of the project and the system with a third-party diff tool. The configuration data is held in database tables, so a simple dump of the data from the tables to a file is what I set out to do.

There is already a project in the application that handles this type of scenario, but the code is “So Dirty, So Complex” and a nightmare to maintain. It’s also rigid, full of hard dependencies, and so unreliable that no one uses the tool. Hence I was tasked with this use case. Since it is such a simple use case, I wanted to write new code that is easier to maintain and extend and get rid of a very small piece of the stench in this legacy app.

There are three basic parts of my solution:

  • Data Retriever – code to retrieve the data, in this instance, from a database
  • Serializer – code to serialize the data from the Data Retriever into the output format
  • Outputter – code to write the serialized output to its destination, in this instance a file

I am using ADO.NET and DbDataReader for database-agnostic data streaming. The Serializer is currently dependent on DbDataReader, but it would be simple at this point to introduce a Mapper to pre-process the data stream and pass a DTO to the Serializer instead of the reader. I didn’t do this in the first iteration because it didn’t provide enough value for the time it would have taken to abstract DTOs in a way that kept the solution flexible. The Outputter is basic System.IO and there is no interface for it at this point. If we added an Outputter interface, we could output to other targets, say another database table or a service.
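
To make the shape of the solution concrete, here is a minimal sketch of how the three parts could hang together. The names (IDataRetriever, IConfigSerializer, FileOutputter, ConfigDumper) are made up for illustration and are not the actual types in the project.

using System.Data.Common;
using System.IO;

// Illustrative interfaces; the real project's types and names may differ.
public interface IDataRetriever
{
    DbDataReader GetReader();              // stream rows from the source table or view
}

public interface IConfigSerializer
{
    string Serialize(DbDataReader reader); // turn the row stream into human-readable text
}

public class FileOutputter
{
    public void Write(string path, string content)
    {
        File.WriteAllText(path, content);  // basic System.IO, no interface yet
    }
}

// Composition: retrieve -> serialize -> output
public class ConfigDumper
{
    private readonly IDataRetriever _retriever;
    private readonly IConfigSerializer _serializer;
    private readonly FileOutputter _outputter;

    public ConfigDumper(IDataRetriever retriever, IConfigSerializer serializer, FileOutputter outputter)
    {
        _retriever = retriever;
        _serializer = serializer;
        _outputter = outputter;
    }

    public void Dump(string path)
    {
        using (var reader = _retriever.GetReader())
        {
            _outputter.Write(path, _serializer.Serialize(reader));
        }
    }
}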

In the Serializer, I decided on JSON as the human-readable format, because it is a standard format and easier to read than XML, IMHO. Also, it’s a chance to bring new concepts into this legacy application, which has no exposure, that I know of, to JSON. I tested serialization solutions side by side: a custom JSON serializer I coded by hand and JSON.net. My test was to throw the same data set at both solutions in a test harness and record and compare timings. I was mindful of using some semblance of the scientific method, but my test environment is my local dev box, which has a lot going on, so the results are not gospel and can vary depending on what’s running in the background.

After running my tests, analyzing the results, and doing my limited research, I chose to use the custom serializer. Although JSON.net is an awesome framework, the custom serializer was a better fit for this iteration. Here are my observations and the reasons I went this direction (a sketch of the custom approach follows the list):

  • The custom serializer was faster by more than two orders of magnitude in my unscientific test. With JSON.net there is an additional step to create an IEnumerable<dynamic> to facilitate serialization, so we are effectively making an extra pass over the data, but I’m not sure without seeing the JSON.net internals. There may also be optimizations in my JSON.net usage that could make it faster, but I had to do this project quick and simple. Without going into the gory details, here are the average times over 100 iterations of serializing the test data:
    • Custom Time:    00:00:00.0004153
    • JSON.net Time: 00:00:00.2529317
      This difference in time remained about the same across multiple runs of the test.
  • JSON.net output is not formatted. It’s all one line, which defeats the human-readable aspect of the project. This is probably configurable (JSON.net has a Formatting.Indented option), but I didn’t research it.
  • I don’t need to deserialize and don’t have to worry about the complexity of deserialization. If I did, I would probably go with JSON.Net.
  • I am not sure if we are authorized to use JSON.Net in production.
  • We are serializing flat data from one table or view (no table-join object mapping), so we don’t have to worry about the complexities of multi-level hierarchies; otherwise I’d choose JSON.net.
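
For illustration only, the core of a hand-rolled flat serializer can be as small as the sketch below. It is not the actual production code; it assumes flat rows with no nesting and only escapes double quotes, which is roughly the shape of the problem described above.

using System.Data.Common;
using System.Text;

public static class FlatJsonSerializer
{
    // Writes one indented JSON object per row; assumes flat data and minimal escaping.
    public static string SerializeRows(DbDataReader reader)
    {
        var sb = new StringBuilder();
        sb.AppendLine("[");
        bool firstRow = true;
        while (reader.Read())
        {
            if (!firstRow) sb.AppendLine(",");
            firstRow = false;
            sb.AppendLine("  {");
            for (int i = 0; i < reader.FieldCount; i++)
            {
                string value = reader.IsDBNull(i)
                    ? "null"
                    : "\"" + reader.GetValue(i).ToString().Replace("\"", "\\\"") + "\"";
                string separator = i < reader.FieldCount - 1 ? "," : "";
                sb.AppendLine("    \"" + reader.GetName(i) + "\": " + value + separator);
            }
            sb.Append("  }");
        }
        sb.AppendLine();
        sb.AppendLine("]");
        return sb.ToString();
    }
}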

In the end, even though I tied myself to a specific implementation, I built in extensibility through abstractions. We can later swap serializers and also build new dumps based on other tables pretty easily. I could see possibly adding a feature to import a dump file for comparison in the system instead of having to use an external tool. This could also be the basis for moving data from system to system in a way that will be much simpler than the previous project. Taking the time to look at multiple solutions showed me areas that I should think about and prepare for extension without going overboard with abstractions.

The real point is to try more than one thing when looking for a solution to a problem. Compare solutions to find reasons to use or not use each one. Don’t pick the first thing that comes your way or comes to mind. Spend a little time learning something new and experimenting, or you will rarely learn anything new on your own and will stay a slave to Google (no disrespect, as I lean heavily on Google search). This is especially important for engineers dealing with enterprise legacy applications. Don’t let yourself get outdated like a 10-year-old broken legacy application. Stay “So Fresh, So Clean”.

SpecFlow Tagging

It seems like SpecFlow tagging has been a theme in many of my recent posts. It’s a very powerful concept and central to how I control test execution. Hopefully, this will give someone some inspiration. When I learned about tagging, everything else seemed to click in regard to my understanding of the SpecFlow framework. Tagging was a bridge for me.

A Tag in SpecFlow is a way to mark features and scenarios. A Tag is an at sign, @, followed by the text of the tag. If you tag a feature, it applies to all of the feature’s scenarios, and tagging a scenario applies to all of the scenario’s steps.

Out of the box SpecFlow implements the @ignore tag. This tag will generate an ignore attribute in your unit test framework. Although this is the only out-of-the-box tag, you can create as many tags as you like, and there are a lot of cool things you can do with them.

SpecFlow uses tags to generate categories or attributes that can group and control test execution in the unit test framework that is driving your tests. Tags are also used to control test execution outside the unit test framework in SpecFlow’s event bindings and scoped bindings. You also have access to tags in your test code through the ScenarioContext.Current.ScenarioInfo.Tags property.
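
As a quick illustration of that last point, any hook or step can peek at the tags of the running scenario. A minimal sketch (the logging is just a placeholder):

using System;
using TechTalk.SpecFlow;

[Binding]
public class TagLogging
{
    [BeforeScenario]
    public static void LogScenarioTags()
    {
        // Scenario-level tags; feature-level tags are on FeatureContext.Current.FeatureInfo.Tags.
        string[] tags = ScenarioContext.Current.ScenarioInfo.Tags;
        Console.WriteLine("Running scenario with tags: " + string.Join(", ", tags));
    }
}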

Another benefit of tagging is that tags can be targeted from the command line. I can run my tests from the command line and indicate which tags I want to run. This means I can control testing on my continuous integration server.

As you can see, tags are very powerful indeed in shaping the execution of your tests. Below I will explain how I am currently using tags in a standardized way in my test framework. My tagging scheme is still a work in progress, and I am homing in on the proper balance of tags that will allow good control of the tests without creating accidental complexity and a maintenance nightmare.

In the feature file for UI tests I have the following tags:

  • Test Type – in our environment we run Unit, Integration, Smoke, and Functional tests, listed in order of size and speed. A feature definition should only include one test type tag, though there are situations where a Functional feature can include lower-level test types; no other test types should be mixed. So you could have @Smoke and @Functional, but not @Smoke and @Unit.
  • Namespace – each C# namespace segment is a tag. For example, if I have a namespace of Foo.Bar then my tags would be @Foo @Bar
  • SUT – the system under test is the class or name of the page or control
  • Ticket Number – the ticket number that the feature was created or changed on (e.g. @OFT11294). We prefix the number to better identify this tag as a ticket.
  • Requirement – this is a reference to the feature section in the business requirements document that the feature spec is based on (e.g. @Rq4.4.1.5). We prefix the number to better identify this tag as a requirement.

With the above examples our feature definition would look like this:

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome  Thing-a-ma-gig

This allows me to target the running of tests for different namespaces, test types, SUTs, ticket numbers, requirements, or any combination of them. When a developer deploys changes for a ticket, I can just run the tests that target that ticket. This is huge in decreasing the feedback cycle for tests. Instead of having to run all the tests, which could take hours, we can run a subset and get a quicker response on the outcome of the tests.

At the scenario level we want to tag the system under test (SUT). This allows us to run tests for a particular SUT, but it also gives us the flexibility of hooking behavior into our test runs. Say I want to instantiate a specific class for each scenario: if I did a BeforeScenario hook with no tagging, it would apply to every scenario in the test assembly because SpecFlow Hooks are global. With tagging, it will only run for scenarios with matching tags.

…Feature File

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome  Thing-a-ma-gig

@Thingamagig @Rq4.4.1.5
Scenario: Awesome Thing-a-ma-gig works

…Step Class

[BeforeScenario("Thingamagig")]
public static void ScenarioSetup()
{
    sut = new Thingamagig();   // sut is a static field on the step class
}

We have the @Ignore tag that we can apply to features and scenarios to signal to the test runner not to run the tagged item. There is also a @Manual tag that functions like the @Ignore tag for features and scenarios that have to be run manually. I did some custom logic to filter the @Manual tag, but you can find a simple way to do it in this short post on SpecFlow Manual Testing.

In my test framework I have fine-grained control of test execution through a little helper class I created. I won’t bore you with all of the code, but basically I use a scoped BeforeFeature binding to call a bit of code that decides if the feature or scenario should be run or not. Yes, this kind of duplicates what SpecFlow and the unit test framework already do, but I am a control freak. This code is dependent on SpecFlow and NUnit.Framework.

if (IgnoreFeature(FeatureContext.Current.FeatureInfo.Tags))
{
    Assert.Ignore();
    return;
}

The IgnoreFeature() method gets the tags to run or ignore from a configuration file. If a tag in FeatureContext.Current.FeatureInfo.Tags matches an ignore tag from configuration it returns true; if it matches a run tag it returns false. We also include matching on @Ignore and @Manual, even though there is already built-in support for @Ignore. The same concept applies to scenarios and ScenarioContext.Current.ScenarioInfo.Tags, which are evaluated in a global BeforeScenario binding. In the example above I am using Assert.Ignore() to ignore the test. As you probably know, Ignore in unit test frameworks usually works by throwing an exception that immediately ends the test and marks it as ignored. In my actual test framework, I replace Assert.Ignore() with my own custom exception so that the ignored tags can be logged.
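
Here is a rough sketch of what such a helper could look like. The appSettings keys ("TagsToIgnore" and "TagsToRun") are invented for the example; the real configuration shape and fall-through behavior are up to you.

using System;
using System.Configuration;
using System.Linq;

public static class TagFilter
{
    public static bool IgnoreFeature(string[] featureTags)
    {
        // Comma-separated tag lists pulled from configuration; the key names are illustrative.
        var ignoreTags = (ConfigurationManager.AppSettings["TagsToIgnore"] ?? "Ignore,Manual").Split(',');
        var runTags = (ConfigurationManager.AppSettings["TagsToRun"] ?? "").Split(',');

        if (featureTags.Intersect(ignoreTags, StringComparer.OrdinalIgnoreCase).Any())
            return true;    // matches an ignore tag (includes @Ignore and @Manual)

        if (featureTags.Intersect(runTags, StringComparer.OrdinalIgnoreCase).Any())
            return false;   // matches a run tag

        return false;       // nothing matched; the real helper may decide this differently
    }
}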

With this method of tag-based ignoring using a configuration file, we could add an environment tag to control test execution by environment. I say this because I have seen many questions about controlling tests by environment. The point is, there are many ways to use tags, and the way I use them is just one way. You can tag how you want, and there are some great posts out there to give you inspiration on how to use them for your situation.

Pros and Cons of my SpecFlow Tagging Implementation

Pros:

  • Fine grained control of test execution.
  • Controlling tests through configuration and command line execution.
  • Traceability of tests to business requirements and work tickets.

Cons:

  • Tags have no IntelliSense in Visual Studio.
  • Tags are static strings so spelling is an issue.
  • When a namespace or SUT name changes we have to remember to change the name in the feature and step files.
  • Tags can get messy and complicated, especially when a test covers multiple tickets or features.

Given the Keys to Merge

Today we were told that the Dev on Production Support will be responsible for merging branches. Branching here is a little different from what I’m used to. Usually, there is a trunk, main, or master branch that acts as the root that release or feature branches are branched off of. Here release branching is not done from trunk but from the previous release branch. So, if we are currently working on release 5.0 and need to start on release 5.1, a new branch is created off of the 5.0 branch. If there is concurrent development in both branches then the new 5.1 branch needs to be kept in sync with the 5.0 branch, so we merge the 5.0 changes to the 5.1 branch. I am not sure how, or even if, we merge everything back to trunk, or whether we use trunk at all.

With this scheme, when there is a stockpile of changes it is difficult to reconcile all of the potential merge conflicts. If the same file changed in both branches being merged, the merge can get a little hairy trying to reconcile everything. It was decided to merge more often so you don’t have to face a mountain of merge issues. The Production Support Dev will merge branches, preferably daily. Since we all rotate Production Support duties, that means the entire team has the keys to merges.

My little brain always has questions, and my thought was: why don’t we just merge when a change is complete? If you make a change in a branch, you should issue a merge request to the team (à la Git, even though we are an SVN shop). A merge request is simply a message to the team asking them to code review the change. If the change passes code review, the Dev can merge the change to the related branches. Well, it seems that merging is time consuming in our environment. I haven’t done it yet, but our tech lead said that we could merge our own changes if we have time. I assume this means that under the pressure to get the release complete there is usually no time to merge. I will try it myself and record the time, if I have time :). Although my production code dev days are limited, so I won’t get many opportunities to put it to the test.

Main Line Development

At my previous employer we did main line development. This basically means we developed directly in main (i.e. trunk). Main wasn’t stable and always had the latest changes of the next release. When we deployed a release we would cut a release tag off of the commit in main that was deployed to production. There would always be a tag that contains the currently deployed code.

Any branching outside of the release tag branching was frowned upon, and we were asked to limit branching (i.e. you’d better not branch) because merging was thought to be evil. So, we didn’t do concurrent development on multiple releases. The focus was entirely on the current release. When the release was QA validated, we’d cut the release branch and move on to the next release. This meant that as the release slowly came to a close we were sometimes left twiddling our thumbs while we waited on QA validation and release branching.

If we needed to do a production hotfix, we would cut a branch off of the current production release tag, make the fix, QA validate it, cut a new release tag, and merge the changes back to main.

It was a very clean process, but I remember having a strong desire to cut feature branches so I could work on the next thing.

Feature Toggles

Which brings up Feature Toggles/Flags. Feature Toggles would allow us to get the best of both worlds. Basically, you add a bit of code to indicate whether a particular change is active. In fact, the active state can be set in configuration, so IT Operations could enable and disable new features and changes with a simple config change. With Feature Toggles we get to do main line and concurrent development at the same time. I have only heard of Feature Toggles in the context of other development stacks like Java. I wonder if there are any .NET shops out there successfully using Feature Toggles?
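
There is nothing stack-specific about the concept; in .NET a toggle can be as simple as an appSetting read at the branch point. A minimal sketch (the class and key names are invented for the example):

using System;
using System.Configuration;

public static class FeatureToggle
{
    public static bool IsEnabled(string featureName)
    {
        // e.g. <add key="Feature.NewVendorSearch" value="true" /> in web.config;
        // IT Operations can flip the flag with a config change, no deployment needed.
        return string.Equals(ConfigurationManager.AppSettings["Feature." + featureName],
                             "true", StringComparison.OrdinalIgnoreCase);
    }
}

// At the branch point in production code:
// if (FeatureToggle.IsEnabled("NewVendorSearch")) { /* new behavior */ } else { /* old behavior */ }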

First Post as an Automation Engineer

As you probably don’t know, I have been given the new title of Automation Engineer. I haven’t really been doing much automation besides a short demo project I gave a brief presentation on. When I got the green light to shuck some of my production development duties (I am still an active Production Developer) and concentrate on automation, I decided to start with an analysis of the current automation framework.

My first task was to review the code of our junior developer (now a very competent developer “sans junior”). He was hired as part of our grad program and was tasked with automating UI tests for our public facing websites. We didn’t have any real UI automation before he started working on it so there was no framework to draw from and he was basically shooting from the hip. He has been guided by our dev manager and has received some input from the team, but he was basically given a goal and turned loose.

He actually did a great job, but having no experience in automation there were bound to be issues. This would hold true even for the most seasoned developer. This post is inspired by a code review of his code. First let’s set some context. We are using Selenium WebDriver, SpecFlow, NUnit, and the Page Object Model pattern. I can’t really show any code as it’s private, but as you will see from my brain dump below, it allowed me to think about some interesting concepts (IMHO).

Keep Features and Scenarios Simple

I am no expert on automated testing yet, but in my experience with UI testing and building my personal UI test framework project, features and scenarios should be as simple as possible, especially when they need to be reviewed and maintained by developers and non-techies. To prevent their eyes from glazing over at the sight of hundreds of lines of steps, simplify your steps and focus your features. You should always start as simply as possible to capture the essence of the feature or scenario. Then, if there is a need for more complexity, negotiate changes and more detail with the stakeholders.

Focus on Business Functionality, Not Bug Hunting

Our grad used truth tables to draw out permutations of test data and scenarios. He wrote a program that generates all of the feature scenarios and built a generic test fixture that could run them. The problem is this results in thousands of lines of feature specifications that no one is going to read, and maintaining them would be a nightmare. There is no opportunity to elicit input from stakeholders on the tests, and the value associated with that collaboration is lost. Don’t get me wrong, I like what he did and it was an ingenious solution, but I believe the time he spent on it could have been better used producing tests that could be discussed with humans. I believe he was focused on catching bugs when he should have focused more on proving the system works. His tool was more for exploratory testing when what we need right now is functional testing.

Improve Cross Team Collaboration

It is important for us to find ways to better collaborate with QA, the business, IT, etc. BDD style tests are an excellent vehicle to drive collaboration as they are easy to read and understand and they are even executable by tools like SpecFlow. Additionally, they provide a road map for work that devs need to accomplish and they provide an official definition of done.

Focus, Test One Thing

It is important to separate test context properly. Try to test one thing. If you have a test to assert that a customer record can be viewed, don’t also assert that the customer record can be edited, as these are two separate tests.

In a UI test, your Given should set the user up at the first relevant point of the workflow. If you are testing an action on the Edit Vendor page, you should set the user up so they are already on the Edit Vendor page. Don’t have steps to go from login to the View Vendor page and eventually the Edit Vendor page, as this would be covered by the navigation test of the View Vendor page. Similarly, if I am doing a View Vendor test I would start on the View Vendor page, and if I wanted to verify my vendor edit links work, I would click one and assert I am on the Vendor Edit page, without any further testing of Vendor Edit page functionality. One assert per test, the same rule as unit tests.

Limit Dependencies

It may be simpler to take advantage of the FeatureContext.Current and ScenarioContext.Current dictionaries to manage context-specific state instead of static members. The statics are good in that they are strongly typed, but they clutter the tests and make it harder to refactor methods to new classes, because we have to take the static dependency to the new class when we already have the FeatureContext and ScenarioContext dependencies available in all step bindings.
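
For example, instead of a static field shared between step classes, the scenario’s own context can carry the state. A minimal sketch (EditVendorPage is a hypothetical page object):

using TechTalk.SpecFlow;

[Binding]
public class VendorSteps
{
    [Given(@"I am on the Edit Vendor page")]
    public void GivenIAmOnTheEditVendorPage()
    {
        var page = new EditVendorPage();    // hypothetical page object
        ScenarioContext.Current.Set(page);  // stored per scenario, no static field needed
    }

    [When(@"I press Save")]
    public void WhenIPressSave()
    {
        ScenarioContext.Current.Get<EditVendorPage>().Save();
    }
}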

Test Pages and Controls in Isolation

Should we define and test features as a complete workflow or as discrete pieces of a configurable workflow? In ASP.NET Web Forms, a page or control has distinct entry and exit points. We enter through a Page request and exit through some form of redirect/transfer initiated by a user gesture and sometimes expressed in an event. In terms of the logic driving the page on the server, the page is not necessarily aware of its entry source and may not have knowledge of its exit destination. Sometimes a Session is carried across pages or controls, and the Session state can be modified and used in a way that has an impact on some workflows. Even in this situation we could set up Session state to a known value before we act on and assert our scenario. So, we should be able to test pages/controls in isolation without regard to the overall workflow.

This is not to say that we should not test features end to end, but we should be able to test features at a page/control level in isolation. The same way we test individual logic units in isolation in a Unit Test. I would think it would be extremely difficult to test every validation scenario and state transition across an entire workflow, but we can cover more permutations in an isolated test because we only have to account for the permutations in one page/control instead of every page/control in a workflow.

Scope Features and Scenarios

I like how he is namespacing feature files with tags.

@CustomerSite @Vendor @EditVendor
Feature: Edit Vendor…

@EditVendor
Scenario:…

In this example there is a website called Customer Site with a page called Vendor and a feature named Edit Vendor. I am not sure if it is necessary to extend the namespace to the scenarios. I think this may be redundant, as Edit Vendor covers the entire feature and every scenario included in it. Granted, he does have a mix of contexts in the feature file (e.g. Edit Vendor and Create Vendor) and he tags the scenarios based on the context of each scenario. As I think about it more, it may be best to extend the entire namespace to the scenario level, as it gives fine-grained control of test execution: we can instruct the test runner to only run certain tags. (Actually, I did a post on scoping with tags.)

Don’t Duplicate Tests

Should we test the operation of common grid functionality in a feature that isn’t specifically about the grid? I mean, if we are testing View Customers, is it important to test that the customer grid can sort and page? Should it be a separate test to remove complexity from the View Customer test? Should we also have a test specifically for the Grid Control?

In the end, he did an awesome job and laid a good solid foundation to move our testing framework forward.

Multiple PostBacks in ASP.NET WebForms

Here are some, not all, reasons why there would be multiple PostBacks on a page:

  • Handling an event multiple times
    • AutoEventWireup="true" in the Page declaration combined with manually wiring up the event in Page Init. Check your code-behind for the old ASP.NET 1.x style of registering events in the system-generated code.
    • Having a control’s AutoPostBack="true" while also doing an explicit PostBack in another control’s event.
  • Custom JavaScript event handlers that don’t cancel the click event, allowing a double post.

Debugging tips:

  • Do an HTTP trace to view the double PostBack requests.
  • Debug and add a watch for Request.Form["__EVENTTARGET"] to find the control initiating the PostBack.
  • HTML-validate your page; a double PostBack can be caused by bad markup.

Lastly, here is a little hack that may help turn off double PostBacks temporarily (note that Q needs to be declared, e.g. in an inline script):

     <script type="text/javascript">var Q = 0;</script>
     <form id="Form1" runat="server" onsubmit="Q++; if (Q == 1) { return true; } else { return false; }">

Common SpecFlow Steps

To share steps across features you have two options that I know of so far. You can create a class that inherits from TechTalk.SpecFlow.Steps, give it a [Binding] attribute, and code steps as normal, and they will be usable in all features in the same assembly. This is inherent in the global nature of steps in SpecFlow, and you don’t have to do anything special to get this behavior.
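
For the in-assembly case, a shared steps class is nothing special. A minimal sketch (the step text and LoginHelper are made up for the example):

using TechTalk.SpecFlow;

[Binding]
public class CommonSteps : Steps   // TechTalk.SpecFlow.Steps
{
    [Given(@"I am logged in as an administrator")]
    public void GivenIAmLoggedInAsAnAdministrator()
    {
        // Hypothetical helper; any feature in this assembly can reuse this step.
        LoginHelper.LogInAs("admin");
    }
}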

When you need to share steps across assemblies it gets a little hairy. Say you want a common set of steps that you use for every SpecFlow feature that you write in your entire development life. To do this you will need a bit of configuration. You would create your Common SpecFlow Steps project, create a step class like you did above, reference this project in the test project you want to use it in, then add this little number to the <specflow> section in your configuration file:

<stepAssemblies>
  <stepAssembly assembly="ExternalStepsAssembly" />
</stepAssemblies>

Just plug in the name of your Common SpecFlow Steps project and you are in business.

GUI Automation

I am building a browser automation framework with Selenium WebDriver, but there are things in the native browsers and the OS that I need to do that Selenium can’t help me with. Since most of my testing is in Windows I could just PowerShell it up, but I was wondering if there was something simpler. Well, I found these projects that I’d like to check out later:

http://www.sikuli.org/ – Sikuli automates anything you see on the screen. It uses image recognition to identify and control GUI components. It is useful when there is no easy access to a GUI’s internal or source code.

http://www.autoitscript.com/ – AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movement and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages (e.g. VBScript and SendKeys). AutoIt is also very small, self-contained and will run on all versions of Windows out-of-the-box with no annoying “runtimes” required!

I am not sure if these can provide the solution I am looking for. There are many more out there, but I already suffer from information overload so one day I will get to work evaluating these.

Selenium WebDriver File Download Testing

Not easy!

When you click a download button with the standard browser configuration, you are asked for a location to download the file. Let’s explore the Google Encyclopedia of copy-and-paste code to see how we can solve this.

The main solution seems to be setting the browser profile to automatically download without asking for a location. Below I have a little info on what I found out about Firefox, Chrome and IE. I didn’t do a deep dive. Right now I am spiking a solution to see if this will need to be a manual test.

Firefox

From our friends at StackOverflow, this post advocates manually setting up the Firefox profile, then switching to that profile in Selenium (http://stackoverflow.com/questions/14645877/how-to-test-downloading-files-in-selenium2-using-java-and-then-check-the-downloa)

In Firefox also you can do the same thing:

  • Create one Firefox profile
  • Change the Firefox download setting so it saves files without asking about a location to save
  • Launch the automation using that profile

FirefoxProfile profile = new FirefoxProfile(profileDir);
driver = new FirefoxDriver(profile);

Another little gem from the same site says I can do one better by configuring the profile in code (http://stackoverflow.com/questions/16746707/how-to-download-any-file-and-save-it-to-the-desired-location-using-selenium-webd):

firefoxProfile.setPreference("browser.helperApps.neverAsk.saveToDisk", "text/csv");

And this post brings it home and puts it all together (http://stackoverflow.com/questions/1176348/access-to-file-download-dialog-in-firefox)

FirefoxProfile firefoxProfile = new FirefoxProfile();
firefoxProfile.setPreference("browser.download.folderList", 2);
firefoxProfile.setPreference("browser.download.manager.showWhenStarting", false);
firefoxProfile.setPreference("browser.download.dir", "c:\\downloads");
firefoxProfile.setPreference("browser.helperApps.neverAsk.saveToDisk", "text/csv");

WebDriver driver = new FirefoxDriver(firefoxProfile); // new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);
driver.navigate().to("http://www.myfile.com/hey.csv");
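
Those snippets are Java, but the .NET bindings expose the same profile API, so a rough C# equivalent (the download directory and MIME type are just examples) would be:

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

var profile = new FirefoxProfile();
profile.SetPreference("browser.download.folderList", 2);                   // 2 = use a custom download dir
profile.SetPreference("browser.download.dir", @"c:\downloads");
profile.SetPreference("browser.download.manager.showWhenStarting", false);
profile.SetPreference("browser.helperApps.neverAsk.saveToDisk", "text/csv");

IWebDriver driver = new FirefoxDriver(profile);
driver.Navigate().GoToUrl("http://www.myfile.com/hey.csv");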

Chrome

There are instructions for achieving the same results as Firefox, but you have to jump through a few minor hoops to get there (http://stackoverflow.com/questions/15824996/how-to-set-chrome-preferences-using-selenium-webdriver-net-binding).
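
I haven’t verified it end to end, but the newer .NET bindings let you set the equivalent Chrome preferences directly on ChromeOptions, roughly like this:

using OpenQA.Selenium.Chrome;

var options = new ChromeOptions();
options.AddUserProfilePreference("download.default_directory", @"c:\downloads");
options.AddUserProfilePreference("download.prompt_for_download", false);

var driver = new ChromeDriver(options);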

IE

There seem to be ways to do this in WatiN in browsers below IE9, but for current IE browsers it’s just plain ugly (http://stackoverflow.com/questions/7500339/how-to-test-file-download-with-watin-ie9/8532222#8532222). I guess I could use one of the GUI automation tools (see the GUI Automation post), but is it really worth all that?

I haven’t tried these at home, so I am naively assuming that they work. Anyway, after you have successfully downloaded the file, you can use C# System.IO to inspect the file format, size, file name, content…you get the picture.
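
The inspection side, at least, is straightforward System.IO plus NUnit asserts; for example (the path and expectations are illustrative):

using System.IO;
using NUnit.Framework;

var file = new FileInfo(@"c:\downloads\hey.csv");
Assert.IsTrue(file.Exists, "expected the download to exist");
Assert.Greater(file.Length, 0, "expected a non-empty file");
Assert.AreEqual(".csv", file.Extension);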

SpecFlow Manual Testing

Sometimes there are scenarios that can only be tested manually. Maybe you are testing colors or placement of a picture or some other important feature that only human eyes can assert as right or wrong. When I have a manual test definition in my automated test framework I want to tell the framework to ignore it, but still report it so we don’t lose sight of it. I am building features into my test framework to handle ignoring manual testing, but I found this code below that does it easily in SpecFlow.

[Binding, Scope(Tag = "Manual")]
public class ManualSteps
{
    [Given(".*"), When(".*"), Then(".*")]
    public void EmptyStep()
    {
    }

    [Given(".*"), When(".*"), Then(".*")]
    public void EmptyStep(string multiLineStringParam)
    {
    }

    [Given(".*"), When(".*"), Then(".*")]
    public void EmptyStep(Table tableParam)
    {
    }
}

From https://github.com/techtalk/SpecFlow/wiki/Scoped-Bindings

With this any scenario tagged with @Manual will be ignored, but they will still be reported in the test report. Sweet.

SpecFlow Ambiguous Step Definitions

It’s been a long time since I posted anything. I have a ton of material to post, just been too busy or lazy to post it.

Anyway, here is the problem. I use SpecFlow, Selenium WebDriver, and the Page Object Model pattern to implement UI tests. I want to scope my SpecFlow Step Definitions, and I ran into this link that made me think twice about doing it: https://github.com/cucumber/cucumber/wiki/Feature-Coupled-Step-Definitions-%28Antipattern%29

The basic premise is that you shouldn’t tie your step definitions to features.

Feature-coupled step definitions are step definitions that can’t be used across features or scenarios. This is evil because it may lead to an explosion of step definitions, code duplication and high maintenance costs.

I can agree with this, but there should be a way to tie a step to a context. The best example of what I mean is when a step is only relevant to a page or control when doing UI tests. When you have a generic step definition but the implementation can be specific to a page or control, it makes sense to be able to scope the step to the page or control. For example, take the scenario from the wiki page above:

Scenario: add description
  Given I have a CV and I'm on the edit description page
  And I fill in "Description" with "Cucumber BDD tool"
  When I press Save
  Then I should see "Cucumber BDD tool" under "Descriptions"

(Note: Save is a regular expression on the wiki, but I rely on page object models so it isn't necessary to pass the value to the step method as my actions are explicit.)

The “When I press Save” step is going to call a method in a page object to actually press the Save button. The step will use a specific page object to execute the step and this generic definition does not provide any context to say which page object to use. If I could scope the step definitions and implementations to a particular page or control, I could have various implementations to target various page objects to drive the page or control under test.

With this we are not coupling by feature, but by page or control. Is this bad or another anti-pattern? Time will tell, but I have had the hardest time trying to name steps with context identifiers to try to get around the problem of step definitions having a global scope in SpecFlow. If I had another scenario that used the “When I press Save” definition, but is implemented with a different page object we run into ambiguity issues because SpecFlow doesn’t know which implementation to use. Without a scoping mechanism I have to add context to the step definitions. Our simple definition would become, “When I press Save on the CV Edit Description page”. This usually makes defining steps and reading them a lot harder than it should be because I have to use more words.

As a general practice, I normally scope my features and scenarios with a tag indicating the page or control under test and this could easily be used in the step definitions to couple step implementations to specific pages and controls. With SpecFlow we can use a feature called Scoped bindings to achieve page/control scoped step definitions.

The Scope attribute can be used to restrict the execution of step definitions by feature, scenario, or tag. Since scoping by feature is an anti-pattern, we won’t use that one. The scenario is a viable restriction, but I believe tag will provide the most flexibility, as we can restrict multiple scenarios across various features without limiting the step to a particular feature or scenario. Although, there is the limitation that tags can only be applied at the feature and scenario level. We cannot tag a scenario step in SpecFlow, i.e. tag the Given, When and Then separately. I am not sure if this would be necessary. I have to get more specs written with scoped bindings to see what troubles I run into.

You can look at the Scoped bindings link for usage, but in our scenario above we could use this technique by tagging the scenario with the page name:

@CVEditDescription
Scenario: add description
  Given I have a CV and I'm on the edit description page
  And I fill in "Description" with "Cucumber BDD tool"
  When I press Save
  Then I should see "Cucumber BDD tool" under "Descriptions"

Then the “When I press Save” step can be scoped to the CV Edit Description page like so:

[When(@"I press Save")]
[Scope(Tag = "CVEditDescription")]
public void WhenIPressSave()
{
    // Call Save on the page object
    cvEditDescriptionPage.Save();
}

We also get the added benefit of being able to run tests just for this page (tag). So far I like it. How do you solve ambiguity issues in SpecFlow?