Tagged: specflow

Keyword Driven Testing with Gherkin in SpecFlow

Well, this may be a little confusing because Gherkin is essentially a keyword driven test language that uses the Given, When, Then keywords. What I am talking about is using Gherkin, specifically the SpecFlow implementation of Gherkin, to create another layer of keywords on top of Gherkin. This allows users to not only define tests in plain English with Gherkin, but also to write new test scenarios without having to ask developers to implement the tests.

Even though we can look at the Gherkin scenario steps as keywords, many times developers are needed for new tests because the steps can’t be reused to compose tests for new pages without developers implementing new steps. Now, this may be an issue with my approach to Gherkin, but many of the examples I have seen suffer from the same problem, so I’m in good company.

Keyword Driven Tests

What I was looking for is a way for developers to just create page objects while users reuse steps to build up new scenarios they want to run, without having developers implement new step definitions. This brought me full circle to one of the first test frameworks I wrote several years ago. I built a keyword driven test framework that allowed testers to open an Excel spreadsheet, select action keywords, enter data, and select controls to target in order to compose new test scenarios without having to involve developers in writing new tests. The spreadsheet looked something like:

Step  Keyword    Data                Control
1     Login      charlesb,topsecret
2     Open       Contact
3     EnterText  Charles Bryant      FullName
4     Click                          Submit

This would be read by the test framework that would use it to drive Selenium to execute the tests. With my limited experience in test automation at the time, this became a maintenance nightmare.

Business users were able to create and run the tests, but there was a lot of duplication because the business had to write the same scenarios over and over again to test different data and expectations for related scenarios. If you maintain large automated test suites, you know duplication is your enemy. For the scenario above, if they wanted to test what would happen when FullName was empty, they would have to write these four steps again in a new scenario. If there are more fields, the number of scenarios needed to properly cover the form could become huge. Then when the business wants to add or remove a field, change the workflow, or make any other change that affects all of the duplicated tests, the change has to be made to all of them.

It would have been more maintainable if I had created more high level keywords like Login and separated data from the scenario step definition, but I wasn’t thinking and just gave up after issue after issue that required fixing many scenarios because of problems with duplication. Soon I learned how to overcome this particular issue with data driven tests and the trick of encapsulating steps in coarse-grained keywords (methods), but I was way past keyword driven tests and had an extreme hate for them.


You may be asking why I am trying to create a keyword driven framework on top of SpecFlow if I hate keyword driven tests. Well, I have been totally against any keyword driven approach because of my experience, but I realized that the problem may not have been the keyword driven approach in general, but my understanding and implementation of it. I just wanted to see what it would look like and what I could do to make it maintainable. I can appreciate allowing business users to create tests on demand without having to involve developers every step of the way. I am not sold on them wanting to do it, and I draw the line at giving them test recorders to write the keyword tests for them (test recorders are another fight I will have with myself later).

So, now I know what it could look like, but I haven’t figured out how I would make it maintainable yet. The source code is on GitHub and it isn’t something anyone should use as it is very naive. If you are looking for a keyword driven approach for SpecFlow, it may provide a base for one way of doing it, but there is a lot to do to make it production ready. There are probably much better ways to implement it, but for a couple hours of development it works and I can see multiple ways of making it better. I probably won’t complete it, but it was fun doing it and taking a stroll down memory lane. I still advocate creating steps that aren’t so fine grained and defined at a higher level of abstraction.

The Implementation

So the approach started with the SpecFlow Feature file. I took a scenario and tried to word it in fine grained steps like the table above.

Scenario: Enter Welcome Text
 Given I am on the "Welcome" page
 And I enter "Hello" in "Welcome"
 When I click "Submit"
 Then I should be on the "Success" page
 And "Header" text should be "Success"

Then I implemented page objects for the Welcome and Success page. Next, I implemented the first Given which allows a user to use this step in any feature scenario and open any page that we have defined as a page object that can be loaded by a page factory. When the business adds a new page a developer just has to add the new page object and the business can compose their tests against the page with the predefined generic steps.

Next, I coded the steps that allow a user to enter text in a control, click a control, verify that a specific page is open, and verify that a control has the specified text. Comparing this to my previous approach, the Keywords are the predefined scenario steps. The Data and Controls are expressed as regex properties in the steps (the quoted items). I would have to define many more keywords for this to be as robust as my previous approach with Excel, but I didn’t have to write an Excel data layer and a complex parsing engine. Yet, this still smells like a maintenance nightmare.
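
As a rough sketch (not the actual project code), the generic steps might look something like this. IPage, IControl, and PageFactory are hypothetical names standing in for the page object layer; the factory maps the quoted strings in the steps to page objects.

```csharp
using NUnit.Framework;
using TechTalk.SpecFlow;

// Hypothetical page object abstractions for the sketch
public interface IControl
{
    void EnterText(string text);
    void Click();
    string Text { get; }
}

public interface IPage
{
    void Open();
    bool IsOpen();
    IControl Control(string name);
}

public static class PageFactory
{
    public static IPage Create(string pageName)
    {
        // In the sample project this is a hard coded string-to-page-object map
        throw new System.NotImplementedException();
    }
}

[Binding]
public class GenericSteps
{
    private IPage page;

    [Given(@"I am on the ""(.*)"" page")]
    public void GivenIAmOnThePage(string pageName)
    {
        this.page = PageFactory.Create(pageName);
        this.page.Open();
    }

    [Given(@"I enter ""(.*)"" in ""(.*)""")]
    public void GivenIEnterIn(string text, string controlName)
    {
        this.page.Control(controlName).EnterText(text);
    }

    [When(@"I click ""(.*)""")]
    public void WhenIClick(string controlName)
    {
        this.page.Control(controlName).Click();
    }

    [Then(@"I should be on the ""(.*)"" page")]
    public void ThenIShouldBeOnThePage(string pageName)
    {
        this.page = PageFactory.Create(pageName);
        Assert.IsTrue(this.page.IsOpen());
    }

    [Then(@"""(.*)"" text should be ""(.*)""")]
    public void ThenTextShouldBe(string controlName, string expected)
    {
        Assert.AreEqual(expected, this.page.Control(controlName).Text);
    }
}
```

With steps like these, adding a page object to the factory is the only developer work required before the business can compose new scenarios against that page.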

One problem outside of test maintenance is code maintenance. I used hard coded strings in my factories to create or select page and control objects. I could have done some reflection and used well known conventions to create generic object factories. I could also have used a data driven approach to supply the data for scenarios, so users would only have to define the actions in the tests, for example, Given I enter text in “Welcome”. They would then define the test data in a spreadsheet or JSON file, and the data could easily be changed for different environments or situations (like scalability tests). With this more generic example step, the implementation would be smart enough to get the text that needs to be entered for the current scenario from the JSON file. This was another problem with my previous keyword driven approach: because I didn’t separate data from the tests, moving to another environment or providing data for different situations meant copying the Excel files and updating the data for each scenario that needed to change.
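
To illustrate that data separation, here is a rough sketch in which an in-memory dictionary stands in for the JSON file (in practice you would deserialize a file per environment). The scenario title and control name come from the example scenario above; the lookup shape is an assumption.

```csharp
using System.Collections.Generic;

// Sketch: test data keyed by scenario title, then by control name.
// A real implementation would load this from a JSON file per environment.
public static class ScenarioData
{
    private static readonly Dictionary<string, Dictionary<string, string>> Data =
        new Dictionary<string, Dictionary<string, string>>
        {
            {
                "Enter Welcome Text", new Dictionary<string, string>
                {
                    { "Welcome", "Hello" }
                }
            }
        };

    public static string Get(string scenarioTitle, string controlName)
    {
        // The generic step "Given I enter text in X" would call this with
        // ScenarioContext.Current.ScenarioInfo.Title and the control name
        return Data[scenarioTitle][controlName];
    }
}
```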


Well, that’s it for now. Maybe I can grow to love, or at least like, this type of keyword driven testing.

Sample Project: https://github.com/charleslbryant/specflowkeyworddriventest

Agile Browser Based Testing with SpecFlow

This may be a little misleading as you may think that I am going to give some sort of Scrumish methodology for browser based testing with SpecFlow. Actually, this is more about how I implemented a feature to make browser based testing more realistic in a CI build.

Browser based testing is slow, real slow. So, if you want to integrate this type of testing into your CI build process you need a way to make the tests run faster or it may add considerable time to the feedback loop given to developers. My current solution is to only run tests for the current sprint. To do this I use a mixture of SpecFlow and my own home grown test framework, TestPipe, to identify the tests I should run and ignore.

The solution I’m using at the moment centers on SpecFlow Tags. Actually, I have blogged about this before in my SpecFlow Tagging post. In this post I want to show a little more code that demonstrates how I accomplish it.

Common Scenario Setup

The first step is to use a common Scenario setup method. I add the setup as a static method to a class accessible by all Step classes.

public class CommonStep
{
    public static void SetupScenario()
    {
        try
        {
            // Call the TestPipe Runner's SetupScenario to process the scenario tags
            Runner.SetupScenario();
        }
        catch (CharlesBryant.TestPipe.Exceptions.IgnoreException ex)
        {
            // Ignore the test, passing the reason through to reporting (NUnit)
            Assert.Ignore(ex.Message);
        }
    }
}

TestPipe Runner Scenario Setup

The method calls the TestPipe Runner method SetupScenario, which handles the Tag processing. If SetupScenario determines that the scenario should be ignored, it throws the exception that is caught. We handle the exception by asserting that the test is ignored with the test framework’s ignore method (in this case NUnit). We also pass the ignore method the exception message, as there are a few reasons why a test may be ignored and we want the reason included in our reporting.

SetupScenario includes this bit of code:

if (IgnoreScenario(tags))
{
    throw new IgnoreException();
}
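
For reference, the IgnoreException itself can be a plain exception type whose message carries the reason so it can be logged. This is a minimal sketch; the real type lives in the CharlesBryant.TestPipe.Exceptions namespace, and the default message here is an assumption.

```csharp
using System;

// Sketch: a custom exception whose Message carries the ignore reason
// ("Ignored", "Manual", etc.) through to the test report.
public class IgnoreException : Exception
{
    public IgnoreException()
        : base("Ignored") // assumed default reason
    {
    }

    public IgnoreException(string message)
        : base(message)
    {
    }
}
```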

Configuring Tests to Run

This is similar to what I blogged about in the SpecFlow Tagging post, but I added a custom exception. Below are the interesting methods for the call stack walked by the SetupScenario method.

public static bool IgnoreScenario(string[] tags)
{
    if (tags == null)
    {
        return false;
    }

    string runTags = GetAppConfigValue("test.scenarios");
    runTags = runTags.Trim().ToLower();

    return Ignore(tags, runTags);
}

This method gets the tags that we want to run from configuration. For each sprint there is a code branch, and the app.config for tests in that branch contains the tags for the tests we want to run for the sprint on the CI build. There is also a regression branch, run weekly, that runs all the tests. All of the feature files are kept together, so being able to tag specific scenarios in a feature file gives us the ability to run specific tests for a sprint while keeping all of the features together.
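
For illustration, the app.config entry read by GetAppConfigValue("test.scenarios") might look like this; the key name matches the call above, while the sprint tag value is hypothetical.

```xml
<configuration>
  <appSettings>
    <!-- tag(s) identifying the scenarios to run on this branch's CI build -->
    <add key="test.scenarios" value="sprint42" />
  </appSettings>
</configuration>
```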

Test Selection

Here is the selection logic.

public static bool Ignore(string[] tags, string runTags)
{
    if (string.IsNullOrWhiteSpace(runTags))
    {
        return false;
    }

    // If runTags has a value, the tags must match or the test is ignored
    if (tags == null)
    {
        throw new IgnoreException("Ignored tags is null.");
    }

    if (tags.Contains("ignore", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Ignored");
    }

    if (tags.Contains("manual", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Manual");
    }

    if (runTags == "all" || runTags == "all,all")
    {
        return false;
    }

    if (tags.Contains(runTags, StringComparer.InvariantCultureIgnoreCase))
    {
        return false;
    }

    return true;
}

This is the meat of the solution, where most of the logic lives. As you can see, the exceptions contain messages for when a test is explicitly ignored with the Ignore or Manual tag. The Manual tag identifies features that are defined but can’t be automated. This way we still have a formal definition that can guide our manual testing.

The variable runTags holds the value retrieved from configuration. If the config defines “all” or “all,all”, we run all the tests that aren’t explicitly ignored. The “all,all” value is a special case for ignoring tests at the Feature level, but this post is about Scenario level ignoring.

The final check compares the scenario’s tags to the runTags config. If the tags include the runTags value, we run the test. Any tests that don’t match are ignored. For scenarios this only works with a single runTag. We might name the tag after the sprint or a sprint ticket; whatever it is, it has to be unique for the sprint. I like the idea of tagging with a ticket number, as it gives traceability to tickets in the project management system.

Improvements and Changes

I have contemplated using a feature file organization similar to SpecLog’s. They advocate a separate folder to hold feature files for the current sprint. Then, I believe (though I’m not sure), they tag the current sprint feature files so they can be identified and run in isolation. The problem with this is that the current sprint features have to be merged with the existing features after the sprint is complete.

Another question I have asked myself is whether I want to allow some kind of test selection through command line parameters. I am not really sure yet. I will put that thought on hold for now, or until a need for command line configuration makes itself evident.

Lastly, another improvement would be to allow specifying multiple runTags. We would have to iterate the run tags and compare, or come up with a performant way of doing it. Performance could be an issue, as this comparison has to run on every test, and for a large project there could be thousands of tests, each already carrying the inherent performance cost of running in a browser.
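
A sketch of what the multiple runTags comparison could look like, focusing only on the matching logic (the explicit @ignore/@manual handling would stay as above). TagRunner is a hypothetical class name, and treating the config value as a comma-separated list is an assumption.

```csharp
using System;
using System.Linq;

public static class TagRunner
{
    // Returns true when the scenario should be ignored, given a
    // comma-separated list of run tags from configuration.
    public static bool Ignore(string[] tags, string runTags)
    {
        if (string.IsNullOrWhiteSpace(runTags))
        {
            return false;
        }

        if (tags == null)
        {
            return true;
        }

        // Split the configured value once; entries match case-insensitively
        string[] runTagList = runTags
            .Split(',')
            .Select(t => t.Trim())
            .Where(t => t.Length > 0)
            .ToArray();

        if (runTagList.Contains("all", StringComparer.InvariantCultureIgnoreCase))
        {
            return false;
        }

        // Run the scenario if any of its tags matches any configured run tag
        return !tags.Intersect(runTagList, StringComparer.InvariantCultureIgnoreCase).Any();
    }
}
```

Splitting the config value once per setup call keeps the per-test cost to a single set intersection, which should be negligible next to the cost of driving a browser.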


Well that’s it. Sprint testing can include browser based testing and still run significantly faster than running every test in a test suite.

SpecFlow Tagging

It seems like SpecFlow tagging has been a theme in many of my recent posts. It’s a very powerful concept and central to how I control test execution. Hopefully, this will give someone some inspiration. When I learned about tagging, everything else seemed to click in regards to my understanding of the SpecFlow framework. Tagging was a bridge for me.

A Tag in SpecFlow is a way to mark features and scenarios. A Tag is an at sign, @, followed by the text of the tag. If you tag a feature, it will apply to all of the feature’s scenarios, and tagging a scenario will apply to all of the scenario’s steps.

Out of the box, SpecFlow implements the @ignore tag. This tag will generate an ignore attribute in your unit test framework. Although this is the only out of the box tag, you can create as many tags as you like, and there are a lot of cool things you can do with them.

SpecFlow uses tags to generate categories or attributes that can group and control test execution in the unit test framework that is driving your tests. Tags are also used to control test execution outside the unit test framework in SpecFlow’s event bindings and scoped bindings. You also have access to tags in your test code through the ScenarioContext.Current.ScenarioInfo.Tags property.

Another benefit of tagging is that the tags can be targeted from the command line. I can run my tests from the command line and indicate what tags I want to run. This means I can control testing on my continuous integration server.

As you can see, tags are very powerful indeed in shaping the execution of your tests. Below I will explain how I am currently using tags in a standardized way in my test framework. My tagging scheme is still a work in progress and I am honing in on the proper balance of tags that will allow good control of the tests without creating accidental complexity and a maintenance nightmare.

In the feature file for UI tests I have the following tags:

  • Test Type – in our environment we run Unit, Integration, Smoke, and Functional tests, in order of size and speed. A feature definition should only include one test type tag, though there can be situations where a Functional feature includes lower-level tests; no other test types should be mixed. So you could have @Smoke and @Functional, but not @Smoke and @Unit.
  • Namespace – each C# namespace segment is a tag. For example, if I have a namespace of Foo.Bar then my tags would be @Foo @Bar
  • SUT – the system under test is the class or name of the page or control
  • Ticket Number – the ticket number that the feature was created or changed on (e.g. @OFT11294). We prefix the number to better identify this tag as a ticket.
  • Requirement – this is a reference to the feature section in the business requirements document that the feature spec is based on (e.g. @Rq4.4.1.5). We prefix the number to better identify this tag as a requirement.

With the above examples our feature definition would look like this:

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome  Thing-a-ma-gig

This allows me to target the running of tests for different namespaces, test types, SUTs, ticket numbers, requirements, or any combination of them. When a developer deploys changes for a ticket, I can just run the tests that target that ticket. This is huge in decreasing the feedback cycle for tests. Instead of having to run all the tests, which could take hours, we can run a subset and get a quicker response on the outcome of the tests.

At the scenario level we want to tag the system under test (SUT). This allows us to run tests for a particular SUT, but it also gives us the flexibility of hooking behavior into our test runs. Say I want to instantiate a specific class for each scenario; if I used a BeforeFeature Hook with no tagging, it would apply to every scenario in the test assembly because SpecFlow Hooks are global. With tagging, it will run only for scenarios with matching tags.

…Feature File

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome  Thing-a-ma-gig

@Thingamagig @Rq4.4.1.5
Scenario: Awesome Thing-a-ma-gig works

…Step Class

[BeforeScenario("Thingamagig")] // runs only for scenarios tagged @Thingamagig
public static void ScenarioSetup()
{
    sut = new Thingamagig();
}

We have the @Ignore tag that we can apply to features and scenarios to signal to the test runner not to run the tagged item. There is also a @Manual tag that functions like the @Ignore tag for features and scenarios that have to be run manually. I did some custom logic to filter the @Manual tag, but you can find a simple way to do it in this short post on SpecFlow Manual Testing.

In my test framework I have fine grained control of test execution through a little helper class I created. I won’t bore you with all of the code, but basically I use a scoped BeforeFeature binding to call a bit of code that decides if the feature or scenario should be run or not. Yes, this kind of duplicates what SpecFlow and the unit test framework already do, but I am a control freak. This code depends on SpecFlow and NUnit.Framework.

if (IgnoreFeature(FeatureContext.Current.FeatureInfo.Tags))
{
    Assert.Ignore();
}

The IgnoreFeature() method gets the tags to run or ignore from a configuration file. If a tag in FeatureContext.Current.FeatureInfo.Tags matches an ignore tag from configuration, it returns true. If the tag matches a run tag, it returns false. We also include matching to ignore @Ignore and @Manual, even though there is already built-in support for @Ignore. The same concept applies to scenarios and ScenarioContext.Current.ScenarioInfo.Tags, which are evaluated in a global BeforeScenario binding. In the example above I am using Assert.Ignore() to ignore the test. As you probably know, Ignore in unit test frameworks is usually just an exception that is thrown to immediately stop the test. In my actual test framework, I replace Assert.Ignore() with my own custom exception, which allows the ignored tags to be logged.

With this method of tagged based ignoring using a configuration file, we could add a tag for environment to control test execution by environment. I say this because I have seen many questions about controlling tests by environment. The point is, there are many ways to use tags and the way I use them is just one way. You can tag how you want and there are some great posts out there to give you inspiration on how to use them for your situation.

Pros and Cons of my SpecFlow Tagging Implementation


Pros:

  • Fine grained control of test execution.
  • Controlling tests through configuration and command line execution.
  • Traceability of tests to business requirements and work tickets.


Cons:

  • Tags have no IntelliSense in Visual Studio.
  • Tags are static strings, so spelling is an issue.
  • When a namespace or SUT name changes, we have to remember to change the name in the feature and step files.
  • Tags can get messy and complicated, especially when a test covers multiple tickets or features.


First Post as an Automation Engineer

As you probably don’t know, I have been given the new title of Automation Engineer. I haven’t really been doing much automation besides a short demo project that I gave a brief presentation on. When I got the green light to shed some of my production development duties (I am still an active Production Developer) and concentrate on automation, I decided to start with an analysis of the current automation framework.

My first task was to review the code of our junior developer (now a very competent developer “sans junior”). He was hired as part of our grad program and was tasked with automating UI tests for our public facing websites. We didn’t have any real UI automation before he started working on it so there was no framework to draw from and he was basically shooting from the hip. He has been guided by our dev manager and has received some input from the team, but he was basically given a goal and turned loose.

He actually did a great job, but having no experience in automation, there were bound to be issues. That would hold true for even the most seasoned developer. This post is inspired by a review of his code. First, let’s set some context. We are using Selenium WebDriver, SpecFlow, NUnit, and the Page Object Model pattern. I can’t really show any code as it’s private, but as you will see from my brain dump below, the review allowed me to think about some interesting concepts (IMHO).

Keep Features and Scenarios Simple

I am no expert on automated testing, yet, but in my experience with UI testing and building my personal UI test framework project, features and scenarios should be as simple as possible, especially when they need to be reviewed and maintained by developers and non-techies. To prevent their eyes from glazing over at the sight of hundreds of lines of steps, simplify your steps and focus your features. You should always start as simple as possible to capture the essence of the feature or scenario. Then, if there is a need for more complexity, negotiate changes and more detail with the stakeholders.

Focus on Business Functionality, Not Bug Hunting

Our grad used truth tables to draw out permutations of test data and scenarios. He wrote a program that generates all of the feature scenarios and did a generic test fixture that could run them. The problem is this results in thousands of lines of feature specifications that no one is going to read and maintaining them would be a nightmare. There is no opportunity to elicit input from stakeholders on the tests and the value associated with that collaboration is lost. Don’t get me wrong, I like what he did and it was an ingenious solution, but I believe the time he spent on it could have been better served producing tests that could be discussed with humans. I believe he was focused on catching bugs when he should have focused more on proving the system works. His tool was more for exploratory testing when what we need right now is functional testing.

Improve Cross Team Collaboration

It is important for us to find ways to better collaborate with QA, the business, IT, etc. BDD style tests are an excellent vehicle to drive collaboration as they are easy to read and understand and they are even executable by tools like SpecFlow. Additionally, they provide a road map for work that devs need to accomplish and they provide an official definition of done.

Focus, Test One Thing

It is important to separate test context properly. Try to test one thing. If you have a test to assert that a customer record can be viewed don’t also assert that the customer record can be edited as these are two separate tests.

In UI tests, your Given should set the user up at the first relevant point of the workflow. If you are testing an action on the Edit Vendor page, you should set the user up so they are already on the Edit Vendor page. Don’t have steps to go from the login to the View Vendor page and eventually the Edit Vendor page, as that would be covered in the navigation test of the View Vendor page. Similarly, if I am doing a View Vendor test, I would start on the View Vendor page, and if I wanted to verify my vendor edit links work, I would click one and assert I am on the Vendor Edit page, without any further testing of Vendor Edit page functionality. One assert per test, the same rule as unit tests.

Limit Dependencies

It may be simpler to take advantage of the FeatureContext.Current and ScenarioContext.Current dictionaries to manage context specific state instead of static members. The statics are good in that they are strongly typed, but they clutter the tests and make it harder to refactor methods to new classes as we have to take the static dependency to the new class when we already have the FeatureContext and ScenarioContext dependency available in all step bindings.
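
As a hedged sketch of that idea (the "driver" key, the step wording, and the Save element id are all made up for the example), state can be stashed in the scenario's context dictionary instead of a static field:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class VendorSteps
{
    [Given(@"I am on the Edit Vendor page")]
    public void GivenIAmOnTheEditVendorPage()
    {
        IWebDriver driver = new FirefoxDriver();
        // ScenarioContext is a string-keyed dictionary shared by every
        // step binding in the scenario, so no static member is needed
        ScenarioContext.Current["driver"] = driver;
    }

    [When(@"I save the vendor")]
    public void WhenISaveTheVendor()
    {
        // Any step class can retrieve the same instance by key
        var driver = (IWebDriver)ScenarioContext.Current["driver"];
        driver.FindElement(By.Id("Save")).Click();
    }
}
```

The trade-off noted above still applies: the dictionary entries are object-typed, so you give up compile-time checking in exchange for losing the static dependency.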

Test Pages and Controls in Isolation

Should we define and test features as a complete workflow or as discrete pieces of a configurable workflow? In ASP.NET Web Forms, a page or control has distinct entry and exit points. We enter through a page request and exit through some form of redirect/transfer initiated by a user gesture and sometimes expressed in an event. In terms of the logic driving the page on the server, the page is not necessarily aware of its entry source and may not have knowledge of its exit destination. Sometimes a Session is carried across pages or controls, and the Session state can be modified and used in a way that impacts some workflows. Even in this situation, we could set up Session state to a known value before we act on and assert our scenario. So we should be able to test pages/controls in isolation without regard to the overall workflow.

This is not to say that we should not test features end to end, but we should be able to test features at a page/control level in isolation. The same way we test individual logic units in isolation in a Unit Test. I would think it would be extremely difficult to test every validation scenario and state transition across an entire workflow, but we can cover more permutations in an isolated test because we only have to account for the permutations in one page/control instead of every page/control in a workflow.

Scope Features and Scenarios

I like how he is name spacing feature files with tags.

@CustomerSite @Vendor @EditVendor
Feature: Edit Vendor…


In this example there is a website called Customer Site with a page called Vendor and a feature named Edit Vendor. I am not sure if it is necessary to extend the namespace to the scenarios. I think this may be redundant, as Edit Vendor covers the entire feature and every scenario included in it. Granted, he does have a mix of contexts in the feature file (e.g. Edit Vendor and Create Vendor), and he tags the scenarios based on the context of the scenario. As I think about it more, it may be best to actually extend the entire namespace to the scenario level, as it gives fine grained control of test execution: we can instruct the test runner to only run certain tags. (Actually, I did a post on scoping with tags.)

Don’t Duplicate Tests

Should we test the operation of common grid functionality in a feature that isn’t specifically about the grid? I mean, if we are testing View Customers, is it important to test that the customer grid can sort and page? Should it be a separate test, to remove complexity from the View Customer test? Should we also have a test specifically for the Grid Control?

In the end, he did an awesome job and laid a good solid foundation to move our testing framework forward.

Common SpecFlow Steps

To share steps across features, you have two options that I know of so far. You can create a class that inherits from SpecFlow’s Steps class, give it a [Binding] attribute, and code steps as normal; they will be usable in all features in the same assembly. This is inherent in the global nature of steps in SpecFlow, and you don’t have to do anything to get this behavior.
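
A minimal sketch of such a shared step class (the wait step is a made-up example; any step defined here is visible to every feature in the assembly):

```csharp
using TechTalk.SpecFlow;

// [Binding] is what makes the steps discoverable; inheriting from
// TechTalk.SpecFlow.Steps also gives convenient access to the contexts.
[Binding]
public class CommonSteps : Steps
{
    [Given(@"I wait (\d+) seconds")]
    public void GivenIWaitSeconds(int seconds)
    {
        // Deliberately crude; real waits should poll for a condition
        System.Threading.Thread.Sleep(seconds * 1000);
    }
}
```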

When you need to share steps across assemblies it gets a little hairy. Say you want a common set of steps that you use for every SpecFlow feature that you write in your entire development life. To do this you will need a bit of configuration. You would create your Common SpecFlow Steps project, create a step class like you did above, reference this project in the test project you want to use it in, then add this little number to the <specflow> section in your configuration file:


  <stepAssembly assembly="ExternalStepsAssembly" />


Just plug in the name of your Common SpecFlow Steps project and you are in business.

SpecFlow Manual Testing

Sometimes there are scenarios that can only be tested manually. Maybe you are testing colors or placement of a picture or some other important feature that only human eyes can assert as right or wrong. When I have a manual test definition in my automated test framework I want to tell the framework to ignore it, but still report it so we don’t lose sight of it. I am building features into my test framework to handle ignoring manual testing, but I found this code below that does it easily in SpecFlow.

[Binding, Scope(Tag = "Manual")]
public class ManualSteps
{
    [Given(".*"), When(".*"), Then(".*")]
    public void EmptyStep()
    {
    }

    [Given(".*"), When(".*"), Then(".*")]
    public void EmptyStep(string multiLineStringParam)
    {
    }

    [Given(".*"), When(".*"), Then(".*")]
    public void EmptyStep(Table tableParam)
    {
    }
}

From https://github.com/techtalk/SpecFlow/wiki/Scoped-Bindings

With this any scenario tagged with @Manual will be ignored, but they will still be reported in the test report. Sweet.

SpecFlow Ambiguous Step Definitions

It’s been a long time since I posted anything. I have a ton of material to post, just been too busy or lazy to post it.

Anyway, here is the problem. I use SpecFlow, Selenium WebDriver, and the Page Object Model pattern to implement UI tests. I want to scope my SpecFlow Step Definitions, and I ran into this link that made me think twice about doing it: https://github.com/cucumber/cucumber/wiki/Feature-Coupled-Step-Definitions-%28Antipattern%29

The basic premise is that you shouldn’t tie your step definitions to features.

Feature-coupled step definitions are step definitions that can’t be used across features or scenarios. This is evil because it may lead to an explosion of step definitions, code duplication and high maintenance costs.

I can agree with this, but there should be a way to tie a step to a context. The best example of what I mean is when a step is only relevant to a page or control in UI tests. When you have a generic step definition but the implementation can be specific to a page or control, it makes sense to be able to scope the step to that page or control. For example, take the scenario from the wiki page above:

Scenario: add description
  Given I have a CV and I’m on the edit description page
  And I fill in “Description” with “Cucumber BDD tool”
  When I press Save
  Then I should see “Cucumber BDD tool” under “Descriptions”
(Note: Save is a regular expression on the wiki, but I rely on page object models so it isn’t necessary to pass the value to the step method as my actions are explicit.)

The “When I press Save” step is going to call a method in a page object to actually press the Save button. The step will use a specific page object to execute the step and this generic definition does not provide any context to say which page object to use. If I could scope the step definitions and implementations to a particular page or control, I could have various implementations to target various page objects to drive the page or control under test.

With this we are not coupling by feature, but by page or control. Is this bad, or another anti-pattern? Time will tell, but I have had the hardest time trying to name steps with context identifiers to get around the problem of step definitions having global scope in SpecFlow. If I had another scenario that used the “When I press Save” definition but implemented it with a different page object, we would run into ambiguity issues because SpecFlow wouldn’t know which implementation to use. Without a scoping mechanism, I have to add context to the step definitions. Our simple definition would become “When I press Save on the CV Edit Description page”. This usually makes defining steps and reading them a lot harder than it should be because I have to use more words.

As a general practice, I normally scope my features and scenarios with a tag indicating the page or control under test and this could easily be used in the step definitions to couple step implementations to specific pages and controls. With SpecFlow we can use a feature called Scoped bindings to achieve page/control scoped step definitions.

The Scope attribute can be used to restrict the execution of step definitions by feature, scenario, or tag. Since scoping by feature is an anti-pattern, we won’t use that one. The scenario is a viable restriction, but I believe tag will provide the most flexibility, as we can restrict multiple scenarios across various features without limiting the step to a particular feature or scenario. Although, there is the problem of tags only being applied at the feature or scenario level; we cannot tag an individual scenario step in SpecFlow, i.e. tag the Given, When, and Then separately. I am not sure if this would be necessary. I have to get more specs written with scoped bindings to see what troubles I run into.

You can look at the Scoped bindings link for usage, but in our scenario above we could use this technique by tagging the scenario with the page name:

@CVEditDescription
Scenario: add description
  Given I have a CV and I’m on the edit description page
  And I fill in “Description” with “Cucumber BDD tool”
  When I press Save
  Then I should see “Cucumber BDD tool” under “Descriptions”

Then the “When I press Save” step can be scoped to the CV Edit Description page like so:

[When(@"I press Save")]
[Scope(Tag = "CVEditDescription")]
public void WhenIPressSave()
{
    // Call Save on the page object
}

We also get the added benefit of being able to run tests just for this page (tag). So far I like it. How do you solve ambiguity issues in SpecFlow?