So You Want To Automate Web Based Testing
I am on my 3rd enterprise scale automated test project and figured it's time to start distilling some of the lessons and best practices I have picked up along the way. I am by no means an expert in automated testing, but with my new position as Automation Engineer I plan on becoming one. In addition to my day job, I am in the process of building an automated functional testing framework, TestPipe. It's part of my push to improve my Code Quality Pipeline in my personal projects. So, with all of the focus on automated testing I have right now, it's a good time to start formalizing the stuff in my head in a way that I can share with the development community. Don't expect much from this first post, and I am sorry if it gets incoherent. I am writing from the hip and may not have time for a heavy edit, but I have to post it because I said I would post on the 1st. As I get a better understanding of how to express my experience, hopefully I will have much better posts coming up.
Test Environment
First I want to talk a little about the test environment. You should be able to build and deploy the application you want to test. This is more for Developers in Test or testers who actually write code. It is not necessary in every situation, but if you are responsible for proving that an application works as required, then you should be able to get the latest version of the app, build it, deploy it, configure it for testing, and test it. I am of the opinion that you should be able to do this manually before you get into the nuts and bolts of automating it. It is important to have a firm grasp on the DevOps Pipeline for your application if you are looking to validate the quality of the application's development.
This isn't feasible for every application, but having a local test environment is definitely worth the effort. Being able to check out code, then build and deploy it locally, gives you visibility into the application that is hard to achieve on a server. Also, a local environment affords you a more flexible workspace to experiment and break stuff without affecting others on your team.
You should also have a server-based test environment apart from the development and business QA environments. This could be physical or virtualized servers, but you should have a build server and the application and web servers necessary to deploy your application. Once you have your Quality Pipeline automated, you don't want developers or QA messing around on the server and invalidating your tests.
Test Configuration
Configuring the application for test is a big subject and you should give a lot of thought to how best to manage the configuration. To configure the application for test you may want to create the application database, seed it with test data, update the configuration to change from an actual SMTP server to a fake server that allows you to test email without needing the network…and more. Then there is configuring the type of test. You may want to run smoke tests every time developers check in code, a full functional test nightly, and a full regression before certifying the release as production ready. These tests can mean different things to different people, but the point is you will probably need to configure for multiple types of tests. On top of that, with web based and mobile testing you have to configure for different browsers and operating systems. Like I said, configuration is a big subject that deserves a lot of thought and a strategy in order to have an effective Quality Pipeline.
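To make this concrete, here is a minimal sketch of resolving a test-run configuration from the environment. My stack is C#/.Net, but the idea is language agnostic, so the sketch is in Python; the variable names (TEST_SUITE, TEST_BROWSER, TEST_SMTP_HOST) and the suite definitions are invented for illustration, not part of any real framework.

```python
import os

# Hypothetical suite definitions: which tags to run and how patient to be.
SUITES = {
    "smoke": {"tags": ["@smoke"], "timeout_seconds": 60},
    "functional": {"tags": ["@functional", "@smoke"], "timeout_seconds": 600},
    "regression": {"tags": [], "timeout_seconds": 3600},  # empty = run everything
}

def resolve_config(env=os.environ):
    """Build one config dict from environment variables so the same code base
    can run smoke, functional, or regression passes on different browsers."""
    suite = env.get("TEST_SUITE", "smoke")
    if suite not in SUITES:
        raise ValueError(f"unknown suite: {suite}")
    return {
        "suite": suite,
        "browser": env.get("TEST_BROWSER", "chrome"),
        # default to a local fake SMTP server so email tests need no network
        "smtp_host": env.get("TEST_SMTP_HOST", "localhost"),
        **SUITES[suite],
    }
```

A nightly build might export `TEST_SUITE=functional` and `TEST_BROWSER=firefox`, while a check-in build exports nothing and falls back to the smoke defaults.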
Test Code
I won’t talk about the details of test code here, but a little about test code organization. Most of my test experience is in testing web applications and services on a Microsoft stack, but a well structured test code base is necessary for any development stack. I generally start with 4 basic projects in my test solutions:
- Core – this provides the core functionality and base abstractions for the test framework.
- Pages – this provides an interactive model of the pages and controls under test.
- Data – this provides functionality to manage test data and data access.
- Specs – this provides the definition and implementation of tests and utilizes the other three projects to do its job.
In reality, there are more projects in a robust testing framework, but this is what I consider my base framework and I allow the other projects to be born out of necessity. I may have a project that provides base and extended functionality for my browser automation framework, giving the Pages project access to the browser. I may have a project that captures concepts for test design to provide better organization and workflow to the Specs project. I will have even more projects as I gain an understanding of how to test the application, and my architecture will evolve as I find concepts that I foresee reusing or replacing. A typical start to a new project may look something like the layout below (this is somewhat similar to the TestPipe project I have underway to provide a ready-made base test framework).
- Core
- IPage – contract that all pages must implement.
- BasePage – base functionality that all pages inherit from.
- TestHelper – various helpers for things like workflow, logging, and access to wrappers around testing tools like SpecFlow.
- Cache – an abstraction and wrapper around a caching framework.
- In TestPipe, I also abstract the browser and page elements (or controls) so I am not dependent on a specific browser automation framework.
- Pages
- Section – I generally organize this project similar to the website sitemap: a section is just a grouping of pages, and there is a folder for each section. It may make more sense for you to organize this by functional grouping, but following the sitemap has worked for me so far.
- Page – page is a model of a page or control and there is one for each page or control that needs to be tested.
- Specs
- Features – I have a secret love affair with SpecFlow and this folder holds all of the SpecFlow Feature files for the features I am testing.
- Under the Features folder I may or may not have Section folders similar to the Pages project, but as I stated this could be functional groupings if it makes more sense to you.
- Steps
- Organizing steps is still a work in progress for me and I am still trying to find an optimal way to do it. Steps are implementations of feature tests, but they are sometimes abstract enough to be used by multiple features, so in addition to Section folders I may also have a Common or Global folder for shared steps, or other units of organization.
- Data
- DTO – these are just plain old objects that mirror tables in the database (à la Active Record).
- TestData – this provides translation from DTO to what's needed in the test. The actual data needed in tests can be defined and configured in an XML file, spreadsheet, or database, and the code in this folder manages seeding the test data in the database and retrieving the data for the test scenarios based on the needs defined in the test data configuration.
- DataAccess – this is usually a micro-ORM: PetaPoco, Massive…my current flavor is NPoco. It is just used to move data between the database and the DTOs. We don't need a full-blown ORM for our basic CRUD needs, but we don't want to write a bunch of boilerplate code either, so micro-ORMs provide a good balance.
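To illustrate the Core/Pages split above, here is a minimal page object sketch. My projects are C#/.Net; this is a language-agnostic version in Python, and FakeDriver, LoginPage, and the selectors are all invented stand-ins (in a real suite the driver would be Selenium WebDriver or WatiN sitting behind the abstraction).

```python
class FakeDriver:
    """Stand-in for a browser automation framework, invented for this sketch."""
    def __init__(self):
        self.url = None
        self.fields = {}

    def go_to(self, url):
        self.url = url

    def type_into(self, selector, text):
        self.fields[selector] = text

class BasePage:
    """Core: base functionality every page object inherits (cf. BasePage)."""
    url = None  # each concrete page declares its own URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.go_to(self.url)
        return self  # returning self lets specs chain page actions

class LoginPage(BasePage):
    """Pages: one model per page under test; selectors live here, not in specs."""
    url = "/login"

    def sign_in(self, user, password):
        self.driver.type_into("#user", user)
        self.driver.type_into("#password", password)
        return self
```

The point of the split is that a spec only says `LoginPage(driver).open().sign_in(...)`; if a selector changes, only the page object changes, never the tests.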
Automated Test Framework
I use a unit test framework as my Automated Test Framework. The unit test framework is used as the test runner for the test scenario steps and uses a browser automation framework like Selenium or WatiN (the .Net version of Watir) to drive the browser. The browser automation is hidden behind the Page objects, which we will discuss later. The main takeaway is that unit test frameworks are not just for unit tests, which is why I call them automated test frameworks.
I use both NUnit and MSTest as my unit test frameworks. To be honest, I haven't run into a situation where MSTest was a terrible choice, but I have heard many arguments from the purists out there on why I should use something else if I want to achieve test enlightenment. I use NUnit because I use it at work and there is a strong community for it, but 9 times out of 10 I will use MSTest as my framework if there is no precedent for using another one.
Another benefit of MSTest is that it doesn't need any setup, as it comes out of the box with Visual Studio. The integration with Visual Studio and the MS stack is great when you want something quick to build a test framework around. It just works, and it's one less piece of the puzzle I have to think about.
If you have a requirement that points more in the direction of another unit test framework, as long as it can run your browser automation framework, use it. Like I said, I also use NUnit and actually, it's not that difficult to set up. To set up NUnit I use NuGet. I generally install the NuGet package along with the downloadable MSI. The NuGet package only includes the framework, so I use the MSI to get the GUI set up. With Visual Studio 2013 you also get some integration in that you can run and view results in the VS IDE, so you really don't need the GUI if you run 2013 or you install an integrated runner in lower versions of VS.
In the end, it doesn’t matter what you use for your test runner, but you should become proficient at one of them.
Browser Automation
I use Selenium WebDriver as my browser automation framework, but you could use anything that works for your environment and situation. WatiN is a very good framework and there are probably more, but WebDriver is just killing it in the browser automation space right now. In the end, you just need a framework that will allow you to drive a browser from your Automated Test framework. You could roll your own crude driver that makes HTTP requests if you wanted to, but why would you? Actually, there is a reason, but let's not go there.
Again, I am a .Net Developer, and I use NuGet to install Selenium WebDriver. Actually, there are two packages that I install, Selenium WebDriver and Selenium WebDriver Support Classes. The Support Classes provide helper classes for HTML Select elements, waiting for conditions, and Page Object creation.
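The "waiting for conditions" helper is worth understanding even if the Support Classes hand it to you. Here is a hand-rolled sketch of that WebDriverWait-style polling in Python; it is not the Selenium API itself, just the idea behind it: poll a condition until it is truthy or a deadline passes.

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll condition() until it returns something truthy, then return it.

    Mimics the shape of a WebDriverWait-style helper: tests should wait on
    an observable condition (element visible, text present) instead of
    sleeping for a fixed time and hoping the page is ready."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() > deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll)
```

In a real suite the condition would be something like "the search results element exists"; here it is whatever callable you pass in.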
Test Design
As you build tests you will discover that patterns emerge, and some people go as far as building DSLs (domain specific languages) around the patterns to help them define tests in a more structured and simplified manner. I use SpecFlow to provide a more structured means of defining tests. It is a .Net implementation of Cucumber and brings Gherkin and BDD to .Net development. I use it to generate the unit test code and stubs of the actual steps that the automated test framework code calls to run tests.
In the step stubs that SpecFlow generates, I call page objects that call the browser automation framework to drive the browser as a user would for the various test scenarios. Inside the steps you will undoubtedly find patterns that you can abstract, and you may even go as far as creating a DSL, but you should definitely formalize how you will design your tests.
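To show the shape of this, here is a toy sketch of what SpecFlow does under the hood: a Gherkin scenario whose Given/When/Then lines are matched by pattern to step implementations. The feature text, the step bindings, and the fake login logic are all invented for illustration; real SpecFlow does the matching and code generation for you in C#, and the steps would call page objects rather than poke a dict.

```python
import re

# A tiny Gherkin scenario (invented for illustration).
FEATURE = """
Feature: Login
  Scenario: Valid user signs in
    Given I am on the login page
    When I sign in as "alice" with password "secret"
    Then I should see the dashboard
"""

STEPS = {}  # pattern -> step implementation, like SpecFlow's step bindings

def step(pattern):
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register

@step(r'I am on the login page')
def go_to_login(context):
    context["page"] = "login"

@step(r'I sign in as "(\w+)" with password "(\w+)"')
def sign_in(context, user, password):
    context["user"] = user
    context["page"] = "dashboard"  # pretend the login succeeded

@step(r'I should see the dashboard')
def see_dashboard(context):
    assert context["page"] == "dashboard"

def run(feature):
    """Match each Given/When/Then line to a binding and execute it."""
    context = {}
    for line in feature.splitlines():
        match = re.match(r'\s*(?:Given|When|Then|And)\s+(.*)', line)
        if not match:
            continue  # skip Feature/Scenario headers and blank lines
        for pattern, fn in STEPS.items():
            hit = re.fullmatch(pattern, match.group(1))
            if hit:
                fn(context, *hit.groups())
                break
    return context
```

The captured groups in the step pattern become the step method's parameters, which is exactly how SpecFlow feeds scenario values into your step code.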
More .Net specific info: I use NuGet to install SpecFlow also. Are you seeing a pattern here? If you are a .Net developer and you aren't using NuGet, you are missing out on something special. In addition to NuGet, you will need to download the SpecFlow Visual Studio integration so that you get some of the other goodies like item templates, context menus, step generation, etc.
Build Server
I use both Team Foundation Server Build and CruiseControl. Again, the choice of platform doesn't matter; this is just another abstraction in your Quality Pipeline that provides a service. Here the focus is on being able to check out code from the source code repository and build it, all automatically without your intervention. The build server is the director of the entire process. After it builds the code, it can deploy the application, kick off the tests, and collect and report test results.
I use NAnt with CruiseControl to automate the deployment and configuration. TFS can do the same. They both have the ability to manage the running of tests, run static code analysis, and report on the quality of the build.
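Conceptually, the build server is just an ordered pipeline where a failed stage stops everything after it: a broken build should never get deployed or tested. A toy sketch of that idea in Python (the stage names are invented, and a real CruiseControl or TFS build is configured declaratively rather than coded like this):

```python
def run_pipeline(stages):
    """Run (name, action) stages in order; stop at the first failure,
    the way a build server halts deploy and test when the build breaks."""
    results = {}
    for name, action in stages:
        try:
            action()
            results[name] = "ok"
        except Exception as exc:
            results[name] = f"failed: {exc}"
            break  # later stages never run
    return results
```

A run would chain something like checkout, build, deploy, test, and report, with each stage wrapping the real tool (source control, compiler, NAnt script, test runner).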
Conclusion
Well, that's the lowdown on the parts of my general test framework. Hopefully, I can keep it going and give some insight into the nitty gritty details of each piece.