Category: Pipeline

TestPipe Test Automation Framework Release Party

Actually, you missed the party I had with myself when I unchecked private, clicked save on GitHub, and officially released TestPipe. You didn’t miss your chance to check out TestPipe, a little open source project with the goal of making automated browser-based testing more maintainable for .NET’ters. The project source code is hosted on GitHub and the binaries are hosted on NuGet.

If you would like to become a TestPipe Plumber and contribute, I’ll invite you to the next party :).

 

Results of my personal make-a-logo-in-10-minutes challenge.

ThoughtWorks Go Continuous Delivery, Now In Open Source Flavor

If you haven’t heard, the ThoughtWorks Go Continuous Delivery server is now open source. The source code is located on GitHub, https://github.com/gocd/gocd. I decided to give it a test drive and I was pleased. Since I am primarily a Windows developer, my points of reference are CCNET (which is based on the ThoughtWorks CruiseControl continuous build server), TFS Team Build, and TeamCity. I don’t have a lot of TeamCity experience, but I can say that I can easily see automating many scenarios in Go that I was having a hard time conceiving in CCNET. Adding the concepts of Environments, Pipelines, Stages, and User Roles opened an easier path to automated production deployment for me.

Install

Install was pretty simple. Go is cross-platform, but I have a Windows server. I downloaded the Windows packages from http://www.go.cd/download/. I installed the server and agent on my server, opened it up in a browser, and it was there ready to go. Very easy; only a few minutes of clicking and I was ready to start. Before I started building pipelines, I made a few customizations for my environment. I want to use Git and NAnt in my build, test, deploy process, so I added the path to their executables to the Path (Windows system environment variable). This makes it less painful to run them from Go.

Server User Authentication

I am eventually going to use LDAP for user authorization, but for now I set up an htpasswd file with usernames and SHA-1 hashed passwords. Then I entered the name of the file, htpasswd.txt, in the server configuration (Admin > Server Configuration > User Management > Password File Settings). I generated the contents of the SHA-1 hashed password file on http://aspirine.org/htpasswd_en.html, but I could have easily just used a crypto library to hash the passwords myself. Usernames are not case sensitive, but you shouldn’t have colons, spaces, or equal signs in a username unless you escape them with a backslash.
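As a sketch of that crypto-library route (the username and password here are made up), hashing a password into the {SHA} format htpasswd expects is only a few lines of C#:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class HtpasswdEntry
{
    // Builds one line for an htpasswd file in the {SHA} format:
    // the SHA-1 hash of the password, base64 encoded.
    public static string Sha1Entry(string user, string password)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(password));
            return user + ":{SHA}" + Convert.ToBase64String(hash);
        }
    }

    public static void Main()
    {
        Console.WriteLine(Sha1Entry("jane", "secret"));
    }
}
```

Append the output line to htpasswd.txt and Go should pick it up on the next read of the password file.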

Configuration

The Go configuration is stored in an XML file, like Jenkins and CCNET. I know many people have a disdain for XML, but it doesn’t bother me, and it makes Go portable. I can deploy it to another server or even a developer workstation, use a common config file, and it’s ready to start processing pipelines. You can use the UI to configure most of what you want to do, but I enjoy the fine-grained control of editing the XML directly. There is an XML validator, so when my error-prone fingers type the wrong character it will automatically reject the change and continue using the current configuration. Since the configuration is XML, I decided to put the file under source control. The reason for this is to have a backup of the config and to be able to configure the server from XML and automatically push the changes to the Go server with the Go server (sweet). This doesn’t work both ways, so changes made through the UI won’t be pushed to source control (although I can envision some convoluted solution for this). For now, I am the only person managing the server, and I will configure through the XML file and not the UI.

Pipelines

Pipelines are the unit of organization in Go. A pipeline is composed of stages, which are composed of jobs, which are composed of tasks. Tasks are the basic unit of work in Go. In my instance, most of my tasks are NAnt tasks that call targets in NAnt build scripts. There are all kinds of ways to create chains of actions and dependencies. This is probably going to be where I focus a lot of attention, as this is where the power of the system lies, IMHO. Being able to customize the pipelines and wire up various dependencies is huge for me. Granted, I could do this in CCNET to a certain degree, but Go just made it plain to envision and implement.
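To make the hierarchy concrete, here is a rough sketch of what a pipeline with two stages might look like in the Go XML configuration. The names, repository URL, and build file paths are made up, and the exact element layout should be checked against the Go configuration reference:

```xml
<pipeline name="my-app">
  <materials>
    <git url="https://example.com/my-app.git" />
  </materials>
  <stage name="build">
    <jobs>
      <job name="compile">
        <tasks>
          <nant buildfile="build\default.build" target="compile" />
        </tasks>
      </job>
    </jobs>
  </stage>
  <stage name="test">
    <jobs>
      <job name="unit-tests">
        <tasks>
          <nant buildfile="build\default.build" target="test" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```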

NAnt Problems

Working with NAnt was a pain. Actually, this was the only major hurdle I had to cross. I couldn’t figure out how to pass properties to the NAnt build file. Then I decided to try to pass the properties through the target argument of the Go nant task, like this:

<nant buildfile="testcode\test\this-is-my-buildfile.xml" target="-D:this-is-a-nant-property=&quot;dev&quot; -D:another-nant-property=&quot;jun&quot; ThisIsMyNantTarget" />

Note: Paths are relative to your Agent pipeline working directory.

This worked great, but a more intuitive way of doing this would have been nice. Maybe an arguments attribute, so there is no confusion between an NAnt property and an NAnt target.

Conclusion

I know this post is light on details, but I just wanted to get a quick brain dump of my experience with Go. Go has pretty good documentation on the Go.cd website, and posting questions to support elicited pretty fast feedback for a free product. I am excited to get involved with Go and the Go community. Overall, it was very easy to get a powerful Continuous Delivery server up and running in no time. You should check it out.

Trust No One or a Strange Automated Test

Nullius in verba (Latin for “on the word of no one” or “Take nobody’s word for it”)
http://en.wikipedia.org/wiki/Nullius_in_verba

This is the motto of the Royal Society, the UK’s academy of science. I bring this up because I inherited an automated test suite, and I am in the process of clearing current errors and developing a maintenance plan for them. As I went through the tests I questioned whether I could trust them. In general, it’s difficult to trust automated tests, and it’s worse when I didn’t write them. Then I remembered “nullius in verba” and decided that although I will run these tests, fix them, and maintain them, I cannot trust them. In fact, since I am now responsible for all automated tests, I can’t put any value in any test unless I watch it run, understand its purpose, and ascertain the validity of its assumptions. This is not to say that the people who wrote the tests I maintain cannot be trusted because of incompetence. In fact, many of the tests that I maintain were crafted by highly skilled professionals. I just trust no one and want to see for myself.

Even after evaluating automated tests, I can’t really trust them, because I don’t watch every automated test run. I can’t say for certain that they passed or failed, or that a pass is not a false positive. Since I don’t watch every test run, I can only hope they are OK. I can’t even trust someone else’s manual testing, given the fallibility of man, so I can’t trust an automated check written by an imperfect human. So, I view automated tests like manual tests: they are tools in the evaluation of the software under test.

It would be impractical to manually run every test covered by the automated suite, so a good set of tests provides more coverage than manual execution alone. One way automated tests provide value is when they uncover issues that point to interesting aspects of the system that warrant further investigation. Failing tests or unusually slow tests can give a marker to focus on in manual exploration of the software. This is only true if the tests are good: focused on one concept, not flaky or intermittently passing and failing, and exhibiting the other attributes of a good automated test. If the tests are bad, their failures may not reflect real defects, and that strips all value from the automated tests because I have to waste time investigating them. In fact, an automated test suite plagued with bad tests can increase the maintenance effort so much that it negates any value the tests provide. Maintainability is a primary criterion I evaluate when I inherit tests from someone else, and I have to see for myself whether each test is good and maintainable before I can place any value in it.

So, my current stance is to not trust anyone else’s tests. Also, I do not elevate automated tests to being de facto proof that the software works. Yet I find value in automated tests as another tool in my investigation of the quality of the software. If they don’t cost much in terms of maintenance or running them, they provide value in my evaluation of software quality.

Nullius in verba

Scientific Exploration and Software Testing

Test ideas by experiment and observation,

build on those ideas that pass the test,

reject the ones that fail.

Follow the evidence wherever it leads

and question everything.

Astronomer Neil deGrasse Tyson, Cosmos, 2014

This was part of the opening monologue to the relaunch of the Cosmos television series. It provides a nice interpretation of the scientific method, but also fits perfectly with one of my new roles as software tester. Neil finishes this statement with

Accept these terms and the cosmos is yours. Now come with me.

It could be said, “Accept these terms and success in software testing is yours.” What I have learned so far about software testing falls firmly in line with the scientific method. I know software testing isn’t as vast as exploring billions of galaxies, but with millions of different pathways through a computer program, software testing still requires rigor similar to any scientific exploration.

Visual Studio Conditional Build Events

I wanted to run a Post Build Event on Release builds only. I had never done a conditional event, but found out that it isn’t that difficult. From what I have found so far, there are two ways to accomplish this.

If you defined your Post Build Event on the Build Events screen of the project configuration in Visual Studio, you can add a conditional if statement to define the condition under which you want the event to run.

if $(ConfigurationName) == Release (
copy $(TargetPath) $(SolutionDir)\Plugins\$(TargetFileName)
)

In this example I compare the $(ConfigurationName) property to the text “Release”. You can replace this with the name of the build configuration you want to run your post-build script on. A note on build events: they are translated to batch files and then run, so you can do anything in your event that you can do in a batch file (this is a big assumption, as I haven’t run every command in a build event yet, but I strongly suspect it’s safe to assume most cases will be OK).

If you define your build event directly in the project file, you can condition it on a PropertyGroup:

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
    <PostBuildEvent>copy $(TargetPath) $(SolutionDir)\Plugins\$(TargetFileName)</PostBuildEvent>
</PropertyGroup>

If you haven’t used Build Events you should check them out as you can bend your build to your will. You can preprocess files before your build, move files after the build, clean directories…basically anything you can do with a batch file you can do in a Build Event, because it is a batch file.

Reference

Build Events – http://msdn.microsoft.com/en-us/library/ke5z92ks.aspx

Batch Files – http://technet.microsoft.com/en-us/library/bb490869.aspx

Build Event Macros – http://msdn.microsoft.com/en-us/library/42x5kfw4.aspx

IE WebDriver Proxy Settings

I recently upgraded to the NuGet version of the IE WebDriver (IEDriverServer.exe). I started noticing that when I ran my tests locally I could no longer browse the internet, and found myself having to go into internet settings to reset my proxy. My first thought was that the new patch I had just received from corporate IT may have botched a rule for setting the browser proxy. After going through the dance of running tests and then resetting the proxy, I got pretty tired and finally came to the realization that it must be the driver and not IT.

First stop was to check Bing for tips on setting the proxy for WebDriver. I found lots of great stuff for Java, but no help for .NET. Then I stumbled upon a message in the Selenium source change log that said, “Adding type-safe Proxy property to .NET InternetExplorerOptions class.” A quick browse of the source code and I had my solution.

In the code that creates the web driver, I added a Proxy object set to auto detect.

Proxy proxy = new Proxy();
proxy.IsAutoDetect = true;
proxy.Kind = ProxyKind.AutoDetect;

This sets up a new Proxy that is configured for auto detect. Next, I added two properties, Proxy and UsePerProcessProxy, to the InternetExplorerOptions:

var options = new OpenQA.Selenium.IE.InternetExplorerOptions
{
     EnsureCleanSession = true,
     Proxy = proxy,
     UsePerProcessProxy = true
};

Proxy is set to the proxy we previously set up. UsePerProcessProxy tells the driver that we want this configuration set per process, NOT GLOBALLY, thank you. Shouldn’t this be the default? I’m just saying. EnsureCleanSession clears the cache when the driver starts; it is not necessary for the proxy config and is something I already had set.

Anyway, with this set up all we have to do is feed it to the driver.

var webDriver = new OpenQA.Selenium.IE.InternetExplorerDriver(options);

My test coding life is back to normal, for now.

Running SQL Files in C# with SMO

I have used SMO, older versions of SMO, to run SQL in C#, but I wanted to do it in a new application I’m writing to help with seeding databases for tests. Actually, it was pretty easy, and you may ask why not just use ADO or sqlcmd. Well, the SQL I want to run has T-SQL statements that ADO can’t work with. The sqlcmd tool is awesome from the command line, but I wanted a C# solution, and SMO let me get everything up and running with less ceremony.

First, you have to get the SMO DLLs necessary to connect to SQL Server and execute the scripts. I am using the files for SQL Server 2012, and I found the DLLs in C:\Program Files\Microsoft SQL Server\110\SDK\Assemblies\. You will need three of them:

  • Microsoft.SqlServer.ConnectionInfo.dll
  • Microsoft.SqlServer.Management.Sdk.Sfc.dll
  • Microsoft.SqlServer.Smo.dll

You can copy them to a common folder in your solution and reference them in the project you will use to code up your SQL file runner. If you are into all the Ninja code stuff, you will probably host them on a private NuGet server. Next, all you need is a little code:

using System.Data.SqlClient;
using System.IO;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

public class ExecuteSqlScript
{
	public void FromFile(string filePath, string connectionString)
	{
		// ReadAllText opens, reads, and closes the file for us.
		string script = File.ReadAllText(filePath);
		this.FromString(script, connectionString);
	}

	public void FromString(string script, string connectionString)
	{
		using (SqlConnection connection = new SqlConnection(connectionString))
		{
			Server server = new Server(new ServerConnection(connection));

			// ExecuteNonQuery handles scripts with GO batch separators,
			// which plain ADO.NET cannot run.
			server.ConnectionContext.ExecuteNonQuery(script);
		}
	}
}

I basically have two methods. One loads the SQL from a file path and the other accepts a string with the SQL you want to run. They are both pretty self-explanatory: you supply the file path or SQL string plus a connection string and it will execute. You should add some error handling and perhaps tweak the file read security and performance for your situation (see refs below). One thing I will be adding is a method to run all SQL files in a directory, or iterate over a config file containing the paths to SQL files that need to be run.
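That directory runner could look something like the sketch below. The name-order execution and the delegate parameter are my own design assumptions; wiring it to the FromString method above is left to the caller:

```csharp
using System;
using System.IO;
using System.Linq;

public static class SqlScriptDirectoryRunner
{
    // Runs every *.sql file in a directory in name order, handing each
    // file's contents to the supplied runner delegate.
    public static string[] FromDirectory(string directoryPath, Action<string> runScript)
    {
        string[] files = Directory.GetFiles(directoryPath, "*.sql")
            .OrderBy(f => f, StringComparer.OrdinalIgnoreCase)
            .ToArray();

        foreach (string file in files)
        {
            runScript(File.ReadAllText(file));
        }

        // Return the files in run order for logging or verification.
        return files;
    }
}
```

Naming the files with a sortable prefix (01-schema.sql, 02-seed.sql) keeps the run order predictable.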

Anyway, this gives a basis to create a more robust solution. If you need more advanced interaction, like transactions, take a closer look at the API for ServerConnection and read the docs; it wasn’t too hard to get through, as the API is simple.

References

SQL Server Management Objects (SMO) Programming Guide – http://technet.microsoft.com/en-us/library/ms162169.aspx

C# .Net: Fastest Way to Read Text Files – http://blogs.davelozinski.com/curiousconsultant/csharp-net-fastest-way-to-read-text-files

Happy Coding!

Get Deep .NET Code Insight with SonarQube

Mapping My .NET Code Quality Pipeline with SonarQube

This throwback Tuesday post is a draft from 2013 that I updated to use the latest SonarQube. I got the new server running, but SonarQube is not currently a part of our production pipelines. Actually, I think it is a lot easier to run the Docker image for this (docker pull sonarqube:latest), although doing it the hard way was a fun trip down memory lane.

Lately, I’ve been sharing updates about my Code Quality Pipeline. Today, I’m thrilled to report that the core pipeline is nearly operational. What’s even more exciting is that I’ve integrated SonarQube, a powerful tool to monitor and analyze code quality. For those unfamiliar, here’s how SonarQube defines itself:

SonarQube® is an open-source quality management platform. It is designed to continuously analyze and measure technical quality. This analysis ranges from project portfolios to individual methods. It supports multiple programming languages via plugins, including robust support for Java and .NET.

In this post, I’ll guide you on setting up SonarQube to monitor your Code Quality Pipeline. We will leverage its capabilities for a .NET-focused development environment.


Setting Up SonarQube for .NET: Step-by-Step

To get started, I grabbed the latest versions of the required tools.

The SonarQube docs were a helpful reference; they have been updated here. I’ll share the specific steps I followed to install and configure SonarQube in a Windows 11 environment.


1. Database Configuration

SonarQube requires a database for storing analysis results and configuration data. Here’s how I set it up on PostgreSQL (reference):

  1. Create an empty database:
    • Must be configured to use UTF-8 charset.
    • If you want to use a custom schema and not the default “public” one, the PostgreSQL search_path property must be set:
      ALTER USER mySonarUser SET search_path to mySonarQubeSchema
  2. Create a dedicated SonarQube user:
    • Assign CREATE, UPDATE, and DELETE permissions.
  3. Update the sonar.properties file with the database connection after unzipping the SonarQube package (see below):
      sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube
      sonar.jdbc.username=your-sonarqube-user
      sonar.jdbc.password=your-password

2. Installing the SonarQube Web Server

The SonarQube server handles analysis and provides a web interface for viewing results.

  1. Unzip the SonarQube package.
  2. Open the conf\sonar.properties file and configure:
    • Database connection details (see above).
    • Web server properties:
      sonar.web.host=0.0.0.0
      sonar.web.port=9000
      sonar.web.context=/sonarqube
  3. Ensure Java JDK 17 is installed. Anything higher gave me issues with SecurityManager.
  4. Start the server by running the batch file: \bin\windows-x86-{your-system}\StartSonar.bat
  5. Verify the server is running by visiting http://localhost:9000 in your browser. The default credentials are admin / admin.

3. Adding Plugins for .NET Support

SonarQube’s plugins for .NET projects enhance its ability to analyze C# code quality.

  • Navigate to the Marketplace within the SonarQube web interface.
  • Install the ecoCode – C# language plugin and any additional tools needed for your pipeline.

4. Integrating Sonar Scanner

Sonar Scanner executes code analysis and sends results to the SonarQube server.

  1. Download and extract Sonar Scanner.
  2. Add its bin directory to your system’s PATH.
  3. Configure the scanner by editing sonar-scanner.properties:
      sonar.host.url=http://localhost:9000
      sonar.projectKey=my_project
      sonar.projectName=My Project
      sonar.projectVersion=1.0
  4. Run the scanner from the root of your project: sonar-scanner

Monitoring Key Metrics

One of my goals with SonarQube is to track critical operational metrics like:

  • Code Quality: Bugs, vulnerabilities, code smells.
  • Performance: Memory and CPU usage, database load, cache requests.
  • Application Metrics: Web server requests, bandwidth usage, key transactions (e.g., logins, payments, background jobs).

To achieve this, I’ll leverage SonarQube’s dashboards and custom reports. These tools make it easy to visualize and monitor these KPIs in real-time.


The Impact: A Quality-First Development Workflow

With SonarQube integrated, my Code Quality Pipeline is equipped to ensure:

  • Continuous Code Quality: Early detection of bugs and vulnerabilities.
  • Performance Optimization: Proactive monitoring of resource utilization.
  • Improved Collaboration: Shared insights into code quality for the entire team.

Ready to Level Up Your Code Quality?

SonarQube makes it simple to raise the bar on your development processes. Whether you’re optimizing legacy code or building new features, this tool provides the insights you need to succeed.

Start your journey today: Download SonarQube.

Have questions or need guidance? Let me know in the comments—I’d love to hear how you’re leveraging SonarQube in your own pipelines!

Haunted by the ASP.NET White Screen of Death

You ever have a bug that you just can’t put your finger on? Well, this incomplete post is a tale of such a bug. I never had a chance to finish the post or the bug hunt. The post stares at me every time I log into my blog, haunting me. So, I had to post it to release it from its torment.

Early one evening I get the “White Screen of Death” in an ASP.NET application. Basically, it is an empty web page, just an empty HTML tag, when it should be a full-blown data-driven web page. You scared yet? This isn’t the first time I have seen this in this application. I am pretty sure it is related to other errors, as I remember seeing errors in the application log whenever I see the screen in certain scenarios. The problem is, on this evening it happened in a scenario without logged errors, but that doesn’t mean there are no errors…right?

Next, to see if the error was captured elsewhere, I set off to check the server logs. I check the server event logs for issues. Nothing jumps out at me. Then I want to check the IIS logs for the page request, so I need to turn on failed request logging on the server. I hadn’t done this in a while, so I Binged it and got a good hit on this post: http://www.iis.net/configreference/system.applicationhost/sites/sitedefaults/tracefailedrequestslogging. Now I have it installed and configured; I just don’t know how to inspect the resulting traces. Another Bing and I found the answer at http://www.trainsignal.com/blog/iis-7-troubleshooting. OK, tracing is working and I can view the trace files. Now I can’t reproduce the error. Oh, the horror… figures 😦

Finally, I found a failing scenario, and I get nothing of any value in the trace. So I run the scenario again and check the server event logs again; you guessed it, nothing jumps out at me. So far nothing obvious, well, nothing I’m noticing, as I could just be burnt out and missing the obvious (it happens).

I do a little more Binging and the pickings are slim. I get hits on white page issues involving SSLAlwaysNegoClientCert. It seems some people were having an issue with a 413 – Request Entity Too Large error causing the white page (https://communities.bmc.com/docs/DOC-6259), but there is no way this could be my issue because I’m not uploading anything…right? There is no way the ViewState is incredibly large…nah…I’ll check anyway. Better safe than sorry, and maybe something will finally jump out at me.

So, I need to:

  • Check ViewState, mainly the size
  • Check network traffic, view what is being sent and received
  • See if I can repro the scenario locally so I can step through it in a debugger

And this ends our post. Sorry for the abrupt cliffhanger with no solution. I never finished the exploration, as it was a very low priority edge case bug. It does provide some links on IIS tracing and a little insight into my thought process at the time while trying to discover the source of the bug. One thing I have always admired about many of the smart developers I work with is their thought processes and the tool sets they use when investigating issues. I have always felt that I came up short in my ability to quickly discover the root cause of issues, so I have great respect for the software Sherlock Holmeses of the world.

Well, hindsight is 20/20, and the biggest mistake I see is that I didn’t automate the scenario. If I had captured the scenario in a test, I could open it right now and continue where I left off; instead, I have no idea where to even start to find the scenario that triggered this issue. So, the issue may still be lurking deep in the bowels of the application. This is the real horror in this post. Oh well, lesson learned: automate my bug hunt scenarios.

Actually, I wrote this some time ago and remember spending about an hour or so running through this drill in vain. So, in an effort not to waste something that may be of value later, I decided to just post it to stop it from haunting my post list.

Agile Browser Based Testing with SpecFlow

This may be a little misleading, as you may think I am going to give some sort of Scrumish methodology for browser-based testing with SpecFlow. Actually, this is more about how I implemented a feature to make browser-based testing more realistic in a CI build.

Browser-based testing is slow, real slow. So, if you want to integrate this type of testing into your CI build process, you need a way to make the tests run faster, or it may add considerable time to the feedback loop given to developers. My current solution is to only run tests for the current sprint. To do this I use a mixture of SpecFlow and my own home-grown test framework, TestPipe, to identify the tests I should run and ignore.

The solution I’m using at the moment centers on SpecFlow Tags. Actually, I have blogged about this before in my SpecFlow Tagging post. In this post I want to show a little more code that demonstrates how I accomplish it.

Common Scenario Setup

The first step is to use a common Scenario setup method. I add the setup as a static method to a class accessible by all Step classes.

public class CommonStep
{
    public static void SetupScenario()
    {
        try
        {
            Runner.SetupScenario();
        }
        catch (CharlesBryant.TestPipe.Exceptions.IgnoreException ex)
        {
            Assert.Ignore(ex.Message);
        }
    }
}

TestPipe Runner Scenario Setup

The method calls the TestPipe Runner method SetupScenario, which handles the tag processing. If SetupScenario determines that the scenario should be ignored, it will throw the exception that is caught. We handle the exception by asserting that the test is ignored with the test framework’s ignore method (in this case NUnit). We also pass the ignore method the exception message, as there are a few reasons why a test may be ignored and we want the reason included in our reporting.

SetupScenario includes this bit of code:

if (IgnoreScenario(tags))
{
    throw new IgnoreException();
}

Configuring Tests to Run

This is similar to what I blogged about in the SpecFlow Tagging post, but I added a custom exception. Below are the interesting methods for the call stack walked by the SetupScenario method.

public static bool IgnoreScenario(string[] tags)
{
    if (tags == null)
    {
        return false;
    }

    string runTags = GetAppConfigValue("test.scenarios");
    runTags = runTags.Trim().ToLower();

    return Ignore(tags, runTags);
}

This method gets the tags that we want to run from configuration. For each sprint there is a code branch, and the app.config for tests in the branch contains the tags for the tests we want to run for the sprint in the CI build. There is also a regression branch, which runs weekly, that runs all the tests. All of the feature files are kept together, so being able to tag specific scenarios in a feature file gives us the ability to run specific tests for a sprint while keeping all of the features together.
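For reference, the configuration is just an appSettings entry in the branch’s app.config; the tag value shown here is a made-up sprint tag:

```xml
<configuration>
  <appSettings>
    <!-- Scenarios tagged with this value run on this branch's CI build. -->
    <add key="test.scenarios" value="sprint12" />
  </appSettings>
</configuration>
```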

Test Selection

Here is the selection logic.

public static bool Ignore(string[] tags, string runTags)
{
    if (string.IsNullOrWhiteSpace(runTags))
    {
        return false;
    }

    // If runTags has a value, the tag must match or the scenario is ignored.
    if (tags == null)
    {
        throw new IgnoreException("Ignored tags is null.");
    }

    if (tags.Contains("ignore", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Ignored");
    }

    if (tags.Contains("manual", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Manual");
    }

    if (runTags == "all" || runTags == "all,all")
    {
        return false;
    }

    if (tags.Contains(runTags, StringComparer.InvariantCultureIgnoreCase))
    {
        return false;
    }

    return true;
}

This provides the meat of the solution; most of the logic lives here. As you can see, the exceptions contain messages for when a test is explicitly ignored with the Ignore or Manual tag. The Manual tag identifies features that are defined but can’t be automated. This way we still have a formal definition that can guide our manual testing.

The variable runTags holds the value retrieved from configuration. If the config defines “all” or “all,all”, we run all the tests that aren’t explicitly ignored. The “all,all” is a special case for ignoring tests at the Feature level, but this post is about Scenario level ignoring.

The final check is to compare the tags to the runTags config. If the tags include the runTags value, we run the test. Any tests that don’t match are ignored. For scenarios this only works with one runTag. Maybe we name the tag after the sprint, a sprint ticket, or whatever; it just has to be unique for the sprint. I like the idea of tagging with a ticket number, as it gives traceability to tickets in the project management system.

Improvements and Changes

I have contemplated using a feature file organization similar to SpecLog’s. They advocate a separate folder to hold feature files for the current sprint. Then, I believe (but am not sure), they tag the current sprint feature files so they can be identified and run in isolation. The problem with this is that the current sprint features have to be merged with the existing features after the sprint is complete.

Another question I have asked myself is do I want to allow some kind of test selection through command line parameters. I am not really sure yet. I will put that thought on hold for now or until a need for command line configuration makes itself evident.

Lastly, another improvement would be to allow specifying multiple runTags. We would have to iterate the run tags and compare, or come up with a performant way of doing it. Performance would be a concern, as this check runs on every test, and a large project could have thousands of tests, each already carrying the inherent performance cost of running in a browser.
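Here is a sketch of what multi-tag selection might look like. This is not part of TestPipe; the method name and shape are my own assumptions, and a HashSet keeps the per-test comparison cheap:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TagMatcher
{
    // Hypothetical extension: allow a comma-separated list of run tags.
    // A scenario runs if any of its tags matches any configured run tag.
    public static bool ShouldRun(string[] scenarioTags, string runTagsConfig)
    {
        var runTags = new HashSet<string>(
            runTagsConfig.Split(',').Select(t => t.Trim()),
            StringComparer.OrdinalIgnoreCase);

        if (runTags.Contains("all"))
        {
            return true;
        }

        return scenarioTags.Any(runTags.Contains);
    }
}
```

The set is built once per comparison here; in practice you would build it once per run and reuse it across all tests.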

Conclusion

Well, that’s it. Sprint testing can include browser-based testing and still run significantly faster than running every test in the test suite.