Build Once, Deploy Everywhere
We were faced with an interesting problem. We want to build once and deploy that build to multiple environments for testing, staging, and ultimately consumption in production. Well, in addition to building once, we also want to allow remote debugging. We need to build in debug mode to get PDBs generated and to have other configurations in place that allow debugging. Yet, we don’t want to deploy a debug build to staging or production. What do we do?
The thought right now is to do a debug build and create two packages, one for debug and one for release. To do this we would have to strip out the PDBs and turn off the debug settings in the release package. So, we still have one build, but we have two flavors of packages.
I am not yet sure if this is viable, hence the reason I am blogging this out. First, I need to fully understand the difference between a debug and a release build in MSBuild. I have an idea, but I need to verify my assumptions.
Difference Between Debug and Release Build
What I have found is that the main differences between debug and release builds are (roughly; see the compiler flags sketch below):
- A debug build generates PDB files.
- A release build instructs the compiler to allow JIT optimizations.
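If I understand the Visual Studio project defaults correctly, the two configurations map to compiler flags something like this (the source file name is made up):

rem Debug configuration: full symbols, optimization off
csc /debug:full /optimize- /define:DEBUG;TRACE MyApp.cs

rem Release configuration: pdb-only symbols, optimization on
csc /debug:pdbonly /optimize+ /define:TRACE MyApp.cs

Note that even a default Release build produces a PDB (just without the extra tracking info), which is worth knowing when deciding what to strip from a package.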
PDB files are symbol databases that allow a debugger to map executing machine code back to source code (actually to MSIL, which maps to source), so you can set breakpoints in source code and the debugger can halt execution of the machine code. This is probably a terrible explanation, but you should get the gist.
JIT optimizations are things the compiler does to speed up the execution of your code. It may reorganize loops to make them run faster, along with other little magic tricks that happen under the covers that we usually never have to worry about.
Scott Hanselman has an interesting post on this, http://www.hanselman.com/blog/DebugVsReleaseTheBestOfBothWorlds.aspx. This post suggests that you could do a release build and configure the runtime with an ini file that determines whether JIT optimizations are performed or tracking information is generated.
This MSDN article explains more about the ini file: http://msdn.microsoft.com/en-us/library/9dd8z24x(v=vs.110).aspx.
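If I am reading the article right, the ini file sits next to the assembly and shares its name (so a hypothetical MyApp.exe would get a MyApp.ini), with contents like:

[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

Setting GenerateTrackingInfo=1 and AllowOptimize=0 makes a release build debuggable; flipping them back gives you normal release behavior without rebuilding.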
Now What?
After doing this research I learned a lot about building .NET code, but I also realized that I am taking this a little too far. My primary goal is that we build our application once and use that build in multiple environments to get it tested and deployed to production. When we need to do a remote debug, we are researching an issue, and there is no reason we couldn’t flip a switch on one particular build so that it builds in debug mode, deploy it to a test environment, and debug it. Then, after finding and fixing the cause of the issue, we flip the switch back to release and build again, this time allowing the build to go all the way to production.
Issues
The problem here is that we need to make sure that we do not allow debug builds to make it into production. My initial thought is to mark debug builds with a version that is tagged with DEBUG. Then I can have logic in the production deploy that checks for the DEBUG tag and fails the deploy if it is present. We can do the same for PDB files and web.config: specifically, check for the inclusion of PDB files (we shouldn’t have PDB files in production), and check for debug="true" and other configurations that we don’t want leaking into production.
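Here is a minimal sketch of what that deploy gate might look like, assuming the DEBUG version tag convention above; the class name, paths, and tag format are hypothetical, and the web.config check is deliberately naive:

using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;

public static class DeployGate
{
    // Throws if the package looks like a debug build; call this from the
    // production deploy job before releasing the artifact.
    public static void AssertReleaseBuild(string packagePath, string versionTag)
    {
        // 1. A version tagged DEBUG never goes to production.
        if (versionTag.EndsWith("-DEBUG", StringComparison.OrdinalIgnoreCase))
            throw new InvalidOperationException("Debug-tagged version: " + versionTag);

        // 2. No PDB files should ship in the production package.
        if (Directory.GetFiles(packagePath, "*.pdb", SearchOption.AllDirectories).Length > 0)
            throw new InvalidOperationException("PDB files found in package.");

        // 3. Assemblies compiled with the Debug configuration carry a
        //    DebuggableAttribute that disables JIT optimization.
        foreach (string dll in Directory.GetFiles(packagePath, "*.dll", SearchOption.AllDirectories))
        {
            var debuggable = Assembly.LoadFrom(dll).GetCustomAttribute<DebuggableAttribute>();
            if (debuggable != null && debuggable.IsJITOptimizerDisabled)
                throw new InvalidOperationException("Debug assembly found: " + dll);
        }

        // 4. Naive check that web.config does not enable debugging.
        string webConfig = Path.Combine(packagePath, "web.config");
        if (File.Exists(webConfig) && File.ReadAllText(webConfig).Contains("debug=\"true\""))
            throw new InvalidOperationException("web.config has debug=\"true\".");
    }
}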
We would have to alter our deployment pipeline to add a job that performs these checks based on the environment being deployed to. We would also have to look at putting debug builds in a different artifact repository to keep them segregated from release candidates. That would cause another change to the deployment pipeline, where we pull from either the release candidate or the debug repository based on some setting.
Conclusion
This would be a lot of changes to our pipeline, but I believe it is worth it in the long run. It also prevents manual processes from leaking into how we build and deploy the app.
TestPipe Test Automation Framework Release Party
Actually, you missed the party I had with myself when I unchecked private, clicked save on GitHub, and officially released TestPipe. You didn’t miss your chance to check out TestPipe, a little Open Source project with the goal of making automated browser-based testing more maintainable for .NET’ters. The project source code is hosted on GitHub and the binaries are hosted on NuGet.

First Post as an Automation Engineer
As you probably don’t know, I have been given the new title of Automation Engineer. I haven’t really been doing much automation besides a short demo project I gave a brief presentation on. When I got the green light to shed some of my production development duties (I am still an active Production Developer) and concentrate on automation, I decided to start with an analysis of the current automation framework.
My first task was to review the code of our junior developer (now a very competent developer, sans “junior”). He was hired as part of our grad program and was tasked with automating UI tests for our public facing websites. We didn’t have any real UI automation before he started working on it, so there was no framework to draw from and he was basically shooting from the hip. He was guided by our dev manager and received some input from the team, but he was essentially given a goal and turned loose.
He actually did a great job, but having no experience in automation, there were bound to be issues; that would hold true for even the most seasoned developer. This post is inspired by a code review of his code. First, let’s set some context: we are using Selenium WebDriver, SpecFlow, NUnit, and the Page Object Model pattern. I can’t really show any of his code as it’s private, but as you will see from my brain dump below, the review got me thinking about some interesting concepts (IMHO).
Keep Features and Scenarios Simple
I am no expert on automated testing, yet, but in my experience with UI testing and building my personal UI test framework project, features and scenarios should be as simple as possible, especially when they need to be reviewed and maintained by developers and non-techies. To prevent their eyes from glazing over at the sight of hundreds of lines of steps, simplify your steps and focus your features. You should always start as simply as possible to capture the essence of the feature or scenario. Then, if there is a need for more complexity, negotiate changes and more detail with the stakeholders.
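For example, a focused scenario might look something like this (the Vendor feature here is just an illustration):

Feature: Edit Vendor

  Scenario: Save a changed vendor name
    Given I am on the Edit Vendor page for an existing vendor
    When I change the vendor name and save
    Then the updated vendor name is displayed

Three steps capture the essence; edge cases can be negotiated into additional scenarios later.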
Focus on Business Functionality, Not Bug Hunting
Our grad used truth tables to draw out permutations of test data and scenarios. He wrote a program that generates all of the feature scenarios and built a generic test fixture that could run them. The problem is that this results in thousands of lines of feature specifications that no one is going to read, and maintaining them would be a nightmare. There is no opportunity to elicit input from stakeholders on the tests, so the value associated with that collaboration is lost. Don’t get me wrong, I like what he did and it was an ingenious solution, but I believe the time he spent on it could have been better spent producing tests that could be discussed with humans. I believe he was focused on catching bugs when he should have focused more on proving the system works. His tool was more for exploratory testing, when what we need right now is functional testing.
Improve Cross Team Collaboration
It is important for us to find ways to better collaborate with QA, the business, IT, etc. BDD-style tests are an excellent vehicle to drive collaboration, as they are easy to read and understand, and they are even executable by tools like SpecFlow. Additionally, they provide a road map for the work that devs need to accomplish, and they provide an official definition of done.
Focus, Test One Thing
It is important to separate test context properly. Try to test one thing. If you have a test to assert that a customer record can be viewed, don’t also assert that the customer record can be edited, as these are two separate tests.
In a UI test, your Given should set up the user at the first relevant point of the workflow. If you are testing an action on the Edit Vendor page, you should set the user up so they are already on the Edit Vendor page. Don’t have steps that go from login to the View Vendor page and eventually to the Edit Vendor page, as that path is covered by the navigation test of the View Vendor page. Similarly, if I am doing a View Vendor test, I would start on the View Vendor page; and if I wanted to verify my vendor edit link works, I would click one and assert I am on the Edit Vendor page, without any further testing of Edit Vendor page functionality (see the sketch below). One assert per test, the same rule as unit tests.
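As an illustration (again, a hypothetical Vendor feature), the navigation scenario starts at its own page and makes a single assertion:

@ViewVendor
Scenario: Navigate to Edit Vendor from View Vendor
  Given I am on the View Vendor page
  When I click the edit link for a vendor
  Then I am on the Edit Vendor page

The Edit Vendor feature picks up from there with its own Given, so neither test repeats the other’s ground.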
Limit Dependencies
It may be simpler to take advantage of the FeatureContext.Current and ScenarioContext.Current dictionaries to manage context-specific state instead of static members. The statics are good in that they are strongly typed, but they clutter the tests and make it harder to refactor methods into new classes, because we have to drag the static dependency along to the new class when we already have the FeatureContext and ScenarioContext dependencies available in all step bindings.
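A minimal sketch of what this looks like in a SpecFlow step binding (the "VendorId" key and value are made up for illustration):

using TechTalk.SpecFlow;

[Binding]
public class EditVendorSteps
{
    [Given(@"I am on the Edit Vendor page for an existing vendor")]
    public void GivenIAmOnTheEditVendorPage()
    {
        // Stash state in the scenario-scoped dictionary instead of a static field.
        ScenarioContext.Current.Set(42, "VendorId");
    }

    [Then(@"the updated vendor name is displayed")]
    public void ThenTheUpdatedVendorNameIsDisplayed()
    {
        // Retrieve the state later in the scenario; if this method moves to
        // another class, no static dependency has to move with it.
        int vendorId = ScenarioContext.Current.Get<int>("VendorId");
        // ... drive the page object and assert using vendorId ...
    }
}

The trade-off is string keys instead of compile-time typing, so it pays to centralize the key names somewhere.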
Test Pages and Controls in Isolation
Should we define and test features as a complete workflow or as discrete pieces of a configurable workflow? In ASP.NET Web Forms, a page or control has distinct entry and exit points. We enter through a page request and exit through some form of redirect/transfer initiated by a user gesture and sometimes expressed in an event. In terms of the logic driving the page on the server, the page is not necessarily aware of its entry source and may not have knowledge of its exit destination. Sometimes a Session is carried across pages or controls, and that Session state can be modified and used in a way that impacts some workflows. Even in this situation we could set up the Session state to a known value before we act and assert our scenario. So, we should be able to test pages/controls in isolation without regard to the overall workflow.
This is not to say that we should not test features end to end, but we should be able to test features at a page/control level in isolation, the same way we test individual logic units in isolation in a unit test. It would be extremely difficult to test every validation scenario and state transition across an entire workflow, but we can cover more permutations in an isolated test because we only have to account for the permutations in one page/control instead of every page/control in a workflow.
Scope Features and Scenarios
I like how he is namespacing feature files with tags.
@CustomerSite @Vendor @EditVendor
Feature: Edit Vendor…

  @EditVendor
  Scenario: …
In this example there is a website called Customer Site with a page called Vendor and a feature named Edit Vendor. I am not sure if it is necessary to extend the namespace to the scenarios. I think this may be redundant, as Edit Vendor covers the entire feature and every scenario included in it. Granted, he does have a mix of contexts in the feature file (e.g., Edit Vendor and Create Vendor), and he tags the scenarios based on the context of each scenario. As I think about it more, it may be best to extend the entire namespace to the scenario level, as it gives fine-grained control of test execution: we can instruct the test runner to only run certain tags. (Actually, I did a post on scoping with tags.)
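Since SpecFlow surfaces tags as NUnit categories, the runner can filter on them. With the NUnit 2.x console it would be something like this (the test assembly name is hypothetical, and the exact switch depends on the NUnit version):

nunit-console /include:EditVendor CustomerSite.UiTests.dll

That runs only the scenarios tagged @EditVendor, which is where scenario-level tags earn their keep.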
Don’t Duplicate Tests
Should we test the operation of common grid functionality in a feature that isn’t specifically about the grid? I mean, if we are testing View Customers, is it important to test that the customer grid can sort and page? Should that be a separate test, to remove complexity from the View Customers test? Should we also have a test specifically for the grid control?
In the end, he did an awesome job and laid a solid foundation to move our testing framework forward.