WinDbg: A Real Developer's Debugger

WinDbg is something I have never really used; I just ran through a couple of demos a few years ago. I always see serious engineers using it as their debugger, and I want to grow up to be a serious engineer, so I set out to learn WinDbg once again. Actually, if you can't hear the trauma in my voice, I am still recovering from the mother of all bugs and I am prepping myself for the next time I get a crazy issue.

I can't really give you a reason to use WinDbg yet, but if you want a legitimate one, you can check out this answer on Stack Overflow: http://stackoverflow.com/questions/105130/why-use-windbg-vs-the-visual-studio-vs-debugger. I have had my share of ugly debug problems and I want to know if WinDbg will give me more insight. So, I will learn it now, and the next time one of those debug problems rears its ugly head I will hit it with WinDbg and see what I get.

Install

Try this at your own risk, don't attempt this at home, and all that legal stuff. For me this was Difficult! (with a capital D)

First, there isn't a standalone version that I could find, so I had to install the Windows SDK. Then I had to find a version compatible with my environment, Windows 7 and .NET 4; most links to the SDK redirect to the newest version. After many install, uninstall, Google, install, uninstall, Google loops, the correct process for my computer was this (it may be different for you):

  1. Uninstall Microsoft Visual C++ 2010 x64 Redistributable and Microsoft Visual C++ 2010 x86 Redistributable
  2. Uninstall Microsoft Visual C++ Compilers 2010 SP1 Standard x64 and Microsoft Visual C++ Compilers 2010 SP1 Standard x86
  3. Install the Windows 7.1 SDK – http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=8279
  4. The SDK installation will only be partially complete after a compiler install error. Install the Microsoft Visual C++ 2010 Service Pack 1 Compiler Update for the Windows SDK 7.1 – http://www.microsoft.com/en-us/download/confirmation.aspx?id=4422.
  5. Re-run your Windows SDK web installer. Choose the first option to add features to the existing installation.
  6. Re-choose (either under redistributables or common) the desired features, including the Debugger Tools.

If you have errors, I recommend you open the log in Notepad, look for the reason for the failure, and plug it into Google. I was able to resolve a few issues like that, and I don't care to rehash what I went through as it's boring as hell and painful to talk about.

On my setup I find WinDbg here – C:\Program Files\Debugging Tools for Windows (x64)

Specify Symbol Location

As you probably know, the Visual Studio debugger works with symbols to provide you with information about the source code you are debugging. WinDbg is no different, and you should tell it where to find your symbols.

_NT_SYMBOL_PATH
C:\symbols;SRV*C:\CachedSymbols*http://msdl.microsoft.com/download/symbols

*Note: This is a global setting and will affect symbol loading in Visual Studio. I found out about this the hard way and got some good info on the issues it can cause here: http://blogs.msdn.com/b/mahuja/archive/2008/07/08/resolving-very-slow-symbol-loading-with-vs-2008-during-debugging.aspx.

This tells WinDbg where to look for code symbols. In this example, WinDbg would first check the symbols folder; if the symbol is not found there, it would check the CachedSymbols folder; and if it is still not found, it would try to download it from the Microsoft symbol server and store it in the CachedSymbols folder.

Basic Commands

If you want some basic WinDbg commands and a quick start, you can check out http://mtaulty.com/communityserver/blogs/mike_taultys_blog/archive/2004/08/03/4656.aspx. I am too lazy to blog about my basic WinDbg walkthrough. I will update the blog when I get to really flex WinDbg's muscles.

Anyway, if you are a brave soul or one of those coding genius types and you actually try WinDbg, I hope it provides you with some extra firepower in your debug arsenal.

Generate Test Service from WSDL

I had the genius idea to create a test service implementation to test our client services. I would use the test service as a fake stand-in for the real services, as the real services are managed by third parties. Imagine wanting to test your client calls to the Twitter API without getting burned by the firewall trying to reach an external service.

As usual, it turns out that I am not a genius and my idea is not unique. There isn't a lot of info on how to do it on the .NET stack, but I did find some discussions after a few Bings. I ended up using wsdl.exe to extract a service interface from the WSDL and implementing the interface with simple ASMX files. I won't go into the details, but this is basically what I did:

    1. Get a copy of the WSDL and XSD from the actual service.
    2. Tell wsdl.exe where you stored these files and what you want to do, which is generate a service interface
      wsdl.exe yourFile.wsdl yourfile.xsd /l:CS /serverInterface
    3. Then implement the interface as an ASMX (you could do WCF, but I was in a hurry) – see the sketch after this list
    4. Lastly, point your client to your fake service
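
Here is a minimal sketch of step 3, with hypothetical names standing in for whatever wsdl.exe generates from your WSDL (IQuoteService, QuoteRequest, and QuoteResponse are made up):

// FakeQuoteService.asmx.cs – a fake stand-in that implements the generated interface.
using System.Web.Services;

[WebService(Namespace = "http://example.com/quotes")]
public class FakeQuoteService : WebService, IQuoteService
{
    // Return a canned response so client tests get predictable data.
    [WebMethod]
    public QuoteResponse GetQuote(QuoteRequest request)
    {
        return new QuoteResponse { Symbol = request.Symbol, Price = 42.00m };
    }
}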

In your implementation, you can return whatever you are expecting in your tests. You can also capture the HTTP messages. Actually, the main reason I wanted to do this was to figure out if my SOAP message was properly formatted, and I didn't want to go through all of the trace listener diagnostics configuration or mess with a Fiddler port capture. There may be easier ways to do this, and even best practices around testing service clients, but this was simple, fast, and easy, IMHO. It actually opened up more testing opportunities for me as it's all code and automatable (is that a word?).

New Laptop, New Problems…Mind Your Install Order

I recently moved my local development to a new laptop. Actually, a pretty sweet laptop, solid state drive and everything. My IT department set the laptop up, and IIS was not installed, but .NET and Visual Studio were. So there were a few things I had to do to get .NET properly registered in IIS. Here are some symptoms and solutions, just in case you run into these problems or I forget what I did.

SqlException

When trying to use a web app that connects to a database I would get a SqlException:

A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 – The message received was unexpected or badly formatted.).

A simple command cleared this up.

netsh winsock reset

Actually, I got this fix from a co-worker, and he was even good enough to provide some research on the possible cause of the issue, but I didn't read any of it; I already have too much reading to do. As I understand it, it had something to do with my upgrade to Visual Studio 2013, but don't quote me.

http://technet.microsoft.com/en-us/library/cc753591(v=WS.10).aspx

http://www.techsupportforum.com/forums/f31/netsh-int-ip-reset-and-netsh-winsock-reset-467970.html

http://support.microsoft.com/kb/299357

WCF Service Issue

When trying to run a service locally I kept getting a 405 error in my UI. Under the covers I was actually getting a 404.3: The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map.

You can check the reference on this issue here: http://msdn.microsoft.com/en-us/library/ms752252(v=vs.90).aspx

As it turns out, in my situation it was another artifact of the botched install order. I had to register WCF:

  1. Run Visual Studio Command Prompt as “Administrator”
  2. Change directory to C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation
  3. Run
ServiceModelReg.exe -i

For good measure I also registered ASP.NET with IIS by running these commands in the same command prompt (you don’t have to change directory):

aspnet_regiis -i

then reset IIS

iisreset

Actually, I had to do even more. The above actions registered everything on the server, but the settings weren't copied down to my websites and services. This is another install order issue, as I installed the sites before everything was registered. There may be an easier way to fix this, but I updated my Default Web Site (this is where I place all my sites and services): in Features View I opened Handler Mappings, and in the Actions pane I clicked Revert to Parent. Then I had to do this in each of my sites and services. Actually, because there are so many, and I have some handy dandy automated install/uninstall scripts, I just uninstalled everything and reinstalled all the sites, which picked up the configuration change.

Now that order has been restored to my world I can get back to enjoying this new laptop.

So You Want To Automate Web Based Testing

I am on my third enterprise scale automated test project and figured that I should begin distilling some of the lessons and best practices I have learned along the way. I am by no means an expert in automated testing, but with my new position as Automation Engineer I plan on becoming one. In addition to my day job, I am in the process of building an automated functional testing framework, TestPipe. It's part of my push to improve my Code Quality Pipeline in my personal projects. So, with all of the focus on automated testing I have right now, it's a good time to start formalizing the stuff in my head in a way that I can share with the development community. Don't expect much from this first post, and I am sorry if it gets incoherent. I am writing from the hip and may not have time for a heavy edit, but I have to post it because I said I would post on the 1st. As I get a better understanding of how to express my experience, hopefully I will have much better posts coming up.

Test Environment

First I want to talk a little about the test environment. You should be able to build and deploy the application you want to test. This is more for Developers in Test or testers that actually write code. It is not necessary in every situation, but if you are responsible for proving that an application works as required, then you should be able to get the latest version of the app, build it, deploy it, configure it for testing, and test it. I am of the opinion that you should be able to do this manually before you get into the nuts and bolts of automating it. It is important to have a firm grasp on the DevOps pipeline for your application if you are looking to validate the quality of the application's development.

This isn’t feasible for every application, but having a local test environment is definitely worth the effort. Being able to checkout code, build and deploy it locally gives you visibility into the application that is hard to achieve on a server. Also, having a local environment affords you a more flexible work space to experiment and break stuff without affecting others on your team.

You should also have a server based test environment apart from the development and business QA environment. This could be physical or virtualized servers, but you should have a build server and the application and web servers necessary to deploy your application. Once you have your Quality Pipeline automated, you don’t want developers or QA messing around on the server invalidating your tests.

Test Configuration

Configuring the application for test is a big subject, and you should give a lot of thought to how best to manage the configuration. To configure the application for test you may want to create the application database, seed it with test data, update the configuration to change from an actual SMTP server to a fake server that allows you to test email without needing the network…and more. Then there is configuring the type of test. You may want to run smoke tests every time developers check in code, a full functional test nightly, and a full regression before certifying the release as production ready. These tests can mean different things to different people, but the point is you will probably need to configure for multiple types of tests. On top of that, with web based and mobile testing you have to configure for different browsers and operating systems. Like I said, configuration is a big subject that deserves a lot of thought and a strategy in order to have an effective Quality Pipeline. A sketch of one way to centralize these knobs follows.
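
For illustration, here is a minimal sketch of a settings class that pulls these knobs from appSettings; the key names are hypothetical, not from any particular framework:

using System.Configuration;

public class TestSettings
{
    public string Browser { get; set; }   // e.g. "Chrome", "Firefox", "IE"
    public string TestType { get; set; }  // e.g. "Smoke", "Functional", "Regression"
    public string SmtpHost { get; set; }  // points at a fake SMTP server under test

    // Read the test configuration once at the start of a run.
    public static TestSettings FromConfig()
    {
        return new TestSettings
        {
            Browser = ConfigurationManager.AppSettings["Test.Browser"],
            TestType = ConfigurationManager.AppSettings["Test.Type"],
            SmtpHost = ConfigurationManager.AppSettings["Test.SmtpHost"]
        };
    }
}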

Test Code

I won’t talk about the details of test code here, but a little about test code organization. Most of my test experience is in testing web applications and services on a Microsoft stack, but a well structured test code base is necessary for any development stack. I generally start with 4 basic projects in my test solutions:

  • Core – this provides the core functionality and base abstractions for the test framework.
  • Pages – this provides an interactive model of the pages and controls under test.
  • Data – this provides functionality to manage test data and data access.
  • Specs – this provides the definition and implementation of tests and utilizes the other three projects to do its job.

In reality, there are more projects in a robust testing framework, but this is what I consider my base framework, and I allow the other projects to be born out of necessity. I may have a project to provide base and extended functionality for my browser automation framework to provide browser access to the Pages project. I may have a project to capture concepts for test design to provide better organization and workflow to the Specs project. I will have even more projects as I gain an understanding of how to test the application, and my architecture will evolve as I find concepts that I foresee reusing or replacing. A typical start to a new project may look something like the layout below (this is somewhat similar to the TestPipe project I have underway to provide a ready made solution for a base test framework).

  • Core
    • IPage – contract that all pages must implement.
    • BasePage – base functionality that all pages inherit from (IPage and BasePage are sketched in code after this list).
    • TestHelper – various helpers for things like workflow, logging, and access to wrappers around testing tools like SpecFlow.
    • Cache – an abstraction and wrapper around a caching framework.
    • In Test Pipe, I also abstract the browser and page elements (or controls) so I am not dependent on a specific browser automation framework.
  • Pages
    • Section – I generally organize this project somewhat like the website sitemap: a section is just a grouping of pages, and there is a folder for each section. It may make more sense for you to organize this by functional grouping, but following the sitemap has worked for me so far.
      • Page – page is a model of a page or control and there is one for each page or control that needs to be tested.
  • Specs
    • Features – I have a secret love affair with SpecFlow and this folder holds all of the SpecFlow Feature files for the features I am testing.
      • Under the Features folder I may or may not have Section folders similar to the Pages project, but as I stated this could be functional groupings if it makes more sense to you.
    • Steps
      • Organizing steps is still a work in progress for me, and I am still trying to find an optimal way to do it. Steps are implementations of feature tests, but they are sometimes abstract enough to be used for multiple features, so I may have a Sections folder as well as a Common or Global folder for shared steps, or other units of organization.
  • Data
    • DTO – these are just plain old objects and they mirror tables in the database (ala active record).
    • TestData – this provides translation from DTOs to what's needed in the test. The actual data needed in tests can be defined and configured in an XML file, spreadsheet, or database, and the code in this folder manages seeding the test data in the database and retrieving the data for the test scenarios based on the needs defined in the test data configuration.
    • DataAccess – this is usually a micro-ORM: PetaPoco, Massive…my current flavor is NPoco. It is just used to move data between the database and the DTOs. We don't need a full blown ORM for our basic CRUD needs, but we don't want to write a bunch of boilerplate code either, so micro-ORMs provide a good balance.
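
As promised, a small sketch of the Core abstractions. This is a simplified illustration, not the actual TestPipe code, and IBrowser is a hypothetical wrapper around the browser automation framework:

// Contract that all pages must implement.
public interface IPage
{
    string Url { get; }
    void Open();
    bool IsDisplayed();
}

// Base functionality that all pages inherit from.
public abstract class BasePage : IPage
{
    protected BasePage(IBrowser browser, string url)
    {
        this.Browser = browser;
        this.Url = url;
    }

    protected IBrowser Browser { get; private set; }

    public string Url { get; private set; }

    public void Open()
    {
        this.Browser.GoTo(this.Url);
    }

    // Each page decides what "displayed" means for itself.
    public abstract bool IsDisplayed();
}

// Hypothetical abstraction that hides Selenium/WatiN from the Pages project.
public interface IBrowser
{
    void GoTo(string url);
}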

Automated Test Framework

I use a unit test framework as my automated test framework. The unit test framework is used as the test runner for the test scenario steps and uses a browser automation framework like Selenium or WatiN (the .NET version of Watir) to drive the browser. The browser automation is hidden behind the Page objects, which we will discuss later. The main takeaway is that unit test frameworks are not just for unit tests, which is why I call them automated test frameworks.

I use both NUnit and MSTest as my unit test frameworks. To be honest, I haven't run into a situation where MSTest was a terrible choice, but I have heard many arguments from the purists out there on why I should use something else if I want to achieve test enlightenment. I use NUnit because I use it at work and there is a strong community for it, but 9 times out of 10 I will use MSTest if there is no precedent for using another one.

Another benefit of MSTest is that it doesn't need any setup, as it comes out of the box with Visual Studio. The integration with Visual Studio and the MS stack is great when you want something quick to build a test framework around. It just works, and it's one less piece of the puzzle I have to think about.

If you have a requirement that points more in the direction of another unit test framework, use it, as long as it can run your browser automation framework. Like I said, I also use NUnit, and actually it's not that difficult to set up. To set up NUnit I use NuGet. I generally install the NuGet package along with the downloadable MSI; the NuGet package only includes the framework, so I use the MSI to get the GUI set up. With Visual Studio 2013 you also get some integration in that you can run and view results in the VS IDE, so you really don't need the GUI if you run 2013 or install an integrated runner in lower versions of VS.

In the end, it doesn’t matter what you use for your test runner, but you should become proficient at one of them.

Browser Automation

I use Selenium WebDriver as my browser automation framework, but you could use anything that works for your environment and situation. WatiN is a very good framework, and there are probably more, but WebDriver is just killing it in the browser automation space right now. In the end, you just need a framework that will allow you to drive a browser from your automated test framework. You could roll your own crude driver that makes HTTP requests if you wanted to, but why would you? Actually, there is a reason, but let's not go there.

Again, I am a .NET developer, and I use NuGet to install Selenium WebDriver. Actually, there are two packages that I install: Selenium WebDriver and Selenium WebDriver Support Classes. The Support Classes provide helpers for HTML select elements, waiting for conditions, and Page Object creation.
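
Here is a quick sketch of the two packages in action; the URL and element ID are hypothetical:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Support.UI;

class WebDriverExample
{
    static void Main()
    {
        IWebDriver driver = new FirefoxDriver();
        driver.Navigate().GoToUrl("http://localhost/login");

        // WebDriverWait and SelectElement come from the Support Classes package.
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until(d => d.FindElement(By.Id("country")).Displayed);

        var country = new SelectElement(driver.FindElement(By.Id("country")));
        country.SelectByText("United States");

        driver.Quit();
    }
}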

Test Design

As you build tests you will discover that patterns emerge, and some people go as far as building DSLs (domain specific languages) around the patterns to help them define tests in a more structured and simplified manner. I use SpecFlow to provide a more structured means of defining tests. It is a .NET implementation of Cucumber and brings Gherkin and BDD to .NET development. I use it to generate the unit test code and stubs of the steps that the automated test framework calls to run tests.

In the step stubs that SpecFlow generates, I call page objects that call the browser automation framework to drive the browser as a user would for the various test scenarios. Inside of the steps you will undoubtedly find patterns that you can abstract, and you may even go as far as creating a DSL, but you should definitely formalize how you will design your tests.
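
For a flavor of how this hangs together, here is a sketch of a SpecFlow step binding filled in to call a page object; LoginPage and Browser.Current are hypothetical names from the kind of Pages and Core projects described earlier:

using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    [When(@"I log in as ""(.*)""")]
    public void WhenILogInAs(string userName)
    {
        // The page object hides the browser automation framework from the step.
        var page = new LoginPage(Browser.Current);
        page.EnterUserName(userName);
        page.Submit();
    }
}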

More .NET specific info: I use NuGet to install SpecFlow too. Are you seeing a pattern here? If you are a .NET developer and you aren't using NuGet, you are missing something special. In addition to the NuGet package you will need to download the SpecFlow Visual Studio integration so that you get some of the other goodies like item templates, context menus, step generation, etc.

Build Server

I use both Team Foundation Server Build and CruiseControl. Again, the choice of platform doesn't matter; this is just another abstraction in your Quality Pipeline that provides a service. Here the focus is on being able to check out code from the source code repository and build it, all automatically without your intervention. The build server is the director of the entire process. After it builds the code it can deploy the application, kick off the tests, and collect and report test results.

I use NAnt with CruiseControl to provide automation of the deployment and configuration; TFS can do the same. Both have the ability to manage the running of tests, run static code analysis, and report on the quality of the build.

Conclusion

Well, that's the lowdown on the parts of my general test framework. Hopefully I can keep it going and give some insight into the nitty gritty details of each piece.

Lambdas and Generic Delegates

It took me a century to figure out lambdas and what they are all about. So, as I was thinking about what to blog about next, I thought I'd share the moment that lambdas clicked for me.

When I looked at lambdas as shorthand for anonymous delegates and expressions, I came up with a phrase to help me visualize what's going on. When I read a lambda I say, "with value of variable return expression." A Predicate delegate expressed as the lambda x => x == 1 would read, "with value of x return x is equal to 1." When I made => read as return, I stopped trying to force it into some kind of comparison operator (if you have code dyslexia like me, you know what I'm talking about).

Another way to look at it is in terms of what is happening under the hood. A lambda is shorthand for an anonymous delegate. In longhand, as a full anonymous delegate declaration, the above lambda would be

delegate(int x){return x == 1;}
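
To see them side by side, here is the same check assigned both ways (a minimal sketch):

// "with value of x return x is equal to 1"
Predicate<int> isOneLambda = x => x == 1;

// The longhand anonymous delegate version of the same thing.
Predicate<int> isOneLonghand = delegate(int x) { return x == 1; };

Console.WriteLine(isOneLambda(1));    // True
Console.WriteLine(isOneLonghand(2));  // False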

Generic Delegates

Since I mentioned anonymous delegates, I should give a passing shout-out to the generic delegates I use most often:

Predicate must return true or false. Can accept only one parameter.

Func returns a value of the type specified by its last type parameter. Can accept 4 input parameters in .NET 3.5 and 16 in 4.0.

Action returns void. Can accept 4 parameters in .NET 3.5 and 16 in 4.0.

Comparison returns a signed int indicating the result of comparison. If the returned int is

< 0 then x < y
= 0 then x = y
> 0 then x > y

The Comparison delegate can accept two objects of the same type that will be compared (x and y).

Converter returns the result of an object conversion operation. It accepts one input object (to be converted) and returns the output object (the converted input).

EventHandler returns void. Accepts a System.Object (the source of the event) and a TEventArgs (the event's data).
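
Here is a quick, minimal sketch of each one in use:

Predicate<int> isPositive = x => x > 0;                       // true/false, one parameter
Func<int, int, int> add = (x, y) => x + y;                    // last type parameter is the return type
Action<string> log = message => Console.WriteLine(message);   // returns void
Comparison<string> byLength = (x, y) => x.Length - y.Length;  // signed int comparison result
Converter<string, int> parse = s => int.Parse(s);             // converts input type to output type
EventHandler<EventArgs> handler = (sender, e) => Console.WriteLine("event fired");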

Anyway, delegates are a powerful feature of C# that I am guilty of not taking advantage of like I should. Hopefully, doing this little post will help ingrain them into my solution designs. If you haven't been exposed to generic delegates or lambda expressions, or aren't currently using them, I challenge you to see how they can help solve some of your problems in a more abstract, efficient, and maintainable way. Try them out; you may discover a new way to think about and write code.

Release the Code Quality Hounds

I wanted to start using more quality tools as a part of my initiative to improve my Code Quality Pipeline, so I decided to implement Code Analysis to ensure the project maintains certain coding standards. In addition to the FxCop style static analysis of the MSIL code by Visual Studio Code Analysis, I decided to also use StyleCop to validate that my source code is up to standard. This would be a major undertaking in most legacy applications, but if you have been following Microsoft coding standards or you're working in a greenfield app, I would definitely recommend implementing these code quality cop bots.

Code Analysis

When you enable Code Analysis for your managed code, it will analyze your managed assemblies and report information about them, such as violations of the programming and design rules set forth in the Microsoft .NET Framework Design Guidelines.

To get Code Analysis up and running I right clicked on my project, clicked Properties, and then clicked on the Code Analysis tab.

[Screenshot: the Code Analysis tab of the project properties]

This screen allows us to configure Code Analysis. What I wanted to do first was create a custom rule set, which lets me configure and save the rules I want for code analysis. So, I selected Microsoft All Rules, clicked Open, then File > Save As, and saved the rule set file in my solution root folder. Then I edited the file in Notepad++ to give it a better name. Then I went back to VS and selected my custom rule set.

To enable Code Analysis on build, I checked the box for Enable Code Analysis on Build. You may want to do this only on your release target, or another target that you run before certifying a build production ready. I did this on every project in the solution, including tests, as I didn't want to get lazy with my quality in tests. Having Code Analysis enabled will also cause it to run on my Team Build gated check-in build, as that build is set to run Code Analysis as configured on the projects.

Also, I wanted Code Analysis violations treated as errors on build, so I added this to the Debug PropertyGroup:

 <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>

Lastly, to do a quick test, I right clicked on my solution in Solution Explorer and clicked "Run Code Analysis on Solution." Since I had some code already written, it did return some issues, which I fixed. I then checked the code in to TFS and the code analysis also ran on Team Build.
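
For a flavor of what these rules catch, here is a small made up example of the kind of code Code Analysis flags (CA1822 is one of the standard Microsoft rules):

public class Greeter
{
    // CA1822: "Mark members as static" – this method touches no instance state,
    // so with warnings treated as errors the build fails until it is made static.
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}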

StyleCop

StyleCop analyzes C# source code to enforce a set of style and consistency rules. First I downloaded and installed StyleCop.

For some reason, getting StyleCop up and running wasn't as easy as it usually is, so I am going to explain an alternate route, but bear in mind that the best route is NuGet.

Actually, I installed StyleCop from NuGet, but it didn't configure the tooling in Visual Studio properly, so I downloaded the package from the project site, http://stylecop.codeplex.com, and reinstalled from the downloaded installer. I tried adding the NuGet package to add the StyleCop MSBuild target, but that too resulted in too many issues. So I followed the instructions in the StyleCop docs to get a custom build target installed for my projects: http://stylecop.codeplex.com/wikipage?title=Setting%20Up%20StyleCop%20MSBuild%20Integration&referringTitle=Documentation.

I decided to show StyleCop violations as errors. So, I added

<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

to the project files. This instructs the project to treat StyleCop violations as a build error.

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProjectGuid>{077D5FD7-F665-4CB1-947E-8E89D47E2689}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>CharlesBryant.AppNgen.Core</RootNamespace>
    <AssemblyName>AppNgen.Core</AssemblyName>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <SccProjectName>SAK</SccProjectName>
    <SccLocalPath>SAK</SccLocalPath>
    <SccAuxPath>SAK</SccAuxPath>
    <SccProvider>SAK</SccProvider>
    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>
    <StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>
  </PropertyGroup>

I wanted StyleCop integrated into my local and Team Build, so I added the build target to the project files. The build target is installed when you install StyleCop, if you select it. You will have to copy the target from the Program Files\MSBuild folder to a folder within your solution folder so it can be checked into TFS. Then you can point to it in your project files to cause StyleCop to run on build by adding

<StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>

to your PropertyGroup and

<Import Project="$(StyleCopTargets)" />

If I were you, I would stick with the StyleCop.MSBuild NuGet package to integrate StyleCop into your build. The route I took and explained above required a lot of debugging and reworking that I don't want to blog about. I actually tried the NuGet package in another project and it worked perfectly.
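
For a flavor of what StyleCop enforces once it is wired in, here is a made up class that trips one of the standard rules (SA1600 requires documentation headers):

public class Widget // SA1600: ElementsMustBeDocumented – the class needs an XML doc header.
{
    public void Spin() // SA1600 fires here too, failing the build with errors enabled.
    {
    }
}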

Conclusion

In the end, I have Code Analysis and StyleCop running on every build, locally or remote, and violations are treated as errors so builds are prevented when my code isn’t up to standard. What’s in your Code Quality Pipeline?

Visual Studio Architecture Validation

As a part of my Code Quality Pipeline, I want to validate my code against my architectural design. This means I don't want invalid code integrations, like client code calling directly into data access code. With Visual Studio 2012 this is no problem. First I had to create a Modeling Project. Then I captured my architecture as a layer diagram. I won't go over the details of how to do this, but you can find resources here:

http://www.dotnetcurry.com/ShowArticle.aspx?ID=848
http://msdn.microsoft.com/en-us/library/57b85fsc(v=vs.110).aspx

Next I added <ValidateArchitecture>true</ValidateArchitecture> to my model project's .modelproj file. This instructs MSBuild to validate the architecture on each build. Since this is configured at the project level, it will validate the architecture against all of the layer diagrams included in the project.

For a simpler way to add the configuration setting, here is an MSDN walkthrough – http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto

1. In Solution Explorer, right-click the modeling project that contains the layer diagram or diagrams, and then click Properties.
2. In the Properties window, set the modeling project’s Validate Architecture property to True.
This includes the modeling project in the validation process.
3. In Solution Explorer, click the layer diagram (.layerdiagram) file that you want to use for validation.
4. In the Properties window, make sure that the diagram’s Build Action property is set to Validate.
This includes the layer diagram in the validation process.

Adding this configuration to the project file only validates my local build. As part of my Quality Pipeline I also want to validate on Team Build (my continuous build server). There was some guidance out there on the web and blogosphere, but for some reason my options didn't match what they were doing. You can try the solution on MSDN (http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto). Like I said, this didn't work for me. I had to right click the build definition in Build Explorer and click Edit Build Definition. On the Process tab, under Advanced, I added /p:ValidateArchitecture=true to MSBuild Arguments.

Now my code is guarded against many of the issues that result from implementations that violate the designed architecture.

Stay “So Fresh, So Clean” in Legacy Application Development

The title may not fit until the end of this post, but bear with me. I work in a large legacy .NET application, and I felt it pulling me into the depths of its ASP.NET 1.1'ness, so I decided to write this post to remind myself that I can keep my skills moving forward and still keep the legacy app maintained.

I had a new project where I had to dump the configuration data for customers into a file with a human readable format. The purpose of this project is to allow someone to compare the configuration data from environment to environment to verify the configuration is maintained and correct as it makes its way to production. The comparison is done outside of the project and the system with a third party diff tool. The configuration data is held in database tables so a simple dump of the data from the table to the file is what I set out to do.

There is already a project in the application that handles this type of scenario, but the code is “So Dirty, So Complex” and a nightmare to maintain. It’s also rigid, full of hard dependencies, and so unreliable that no one uses the tool. Hence the reason I was tasked to work on the use case. Since this is such a simple use case I wanted to create new code that will provide a solution that is easier to maintain and extend and get rid of a very small piece of the stench in this legacy app.

There are three basic parts of my solution:

  • Data Retriever – code to retrieve the data, in this instance, from a database
  • Serializer – code to serialize the data from the Data Retriever into the output format
  • Outputter – code to write the Serializer's output, in this instance to a file

I am using ADO.NET and DbDataReader for database agnostic data streaming. The Serializer is currently dependent on DbDataReader, but it would be simple at this point to introduce a Mapper to pre-process the data stream and pass a DTO instead of a reader to the Serializer. I didn't do this in the first iteration because it didn't provide enough value for the time it would have taken to figure out how to abstract DTOs in a way that made the solution flexible. The Outputter is basic System.IO, and there is no interface for it at this point. If we provided an Outputter interface, we could output to other formats, say to another database table or by posting to a service. A rough sketch of how the parts compose follows.
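
Here is a rough sketch of how the three parts fit together; the names are hypothetical illustrations, not the actual production code:

using System.Data.Common;
using System.IO;

// Serializer: turns the streamed rows into human readable text.
public interface ISerializer
{
    string Serialize(DbDataReader reader);
}

// Outputter: writes the serialized text, in this instance to a file.
public class FileOutputter
{
    public void Write(string path, string content)
    {
        File.WriteAllText(path, content);
    }
}

// Ties retrieval, serialization, and output together.
public class ConfigurationDumper
{
    public void Dump(DbDataReader reader, ISerializer serializer, FileOutputter outputter, string path)
    {
        outputter.Write(path, serializer.Serialize(reader));
    }
}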

In the Serializer, I decided on JSON as the human readable format, because it is a standard format and easier to read than XML, IMHO. Also, it's a chance to bring new concepts into this legacy application, which has no exposure, that I know of, to JSON. I tested serialization solutions side by side: a custom JSON serializer I coded by hand, and JSON.net. My test was to just throw the same data set at both solutions in a test harness and record and compare timing. I was mindful of using some semblance of the scientific method, but my test environment runs on my local dev box, which has a lot going on, so the results are not gospel and can vary depending on what's running in the background.

After running my tests and analyzing the results and my limited research I chose to use the custom serializer. Although JSON.net is an awesome framework, the custom was a better fit for this iteration and here are my observations and reasoning why I went this direction:

  • The custom serializer was more than two orders of magnitude faster in my unscientific test. With JSON.net there is an additional step to create an IEnumerable<dynamic> to facilitate serialization, so we probably added an extra pass over the data, but I'm not sure without seeing the JSON.net internals. There may also be optimizations in my JSON.net usage that could make it faster, but I had to do this project quick and simple. Without going into the gory details, here are the results of the average time over 100 iterations of serializing the test data:
    • Custom Time:    00:00:00.0004153
    • JSON.net Time: 00:00:00.2529317
      This difference in time remained near the same in multiple runs of the test.
  • JSON.net output is not formatted; it's all one line, which defeats the human readable aspect of the project. This is configurable (JSON.net supports indented formatting), but I didn't research it at the time.
  • I don’t need to deserialize and don’t have to worry about the complexity of deserialization. If I did, I would probably go with JSON.Net.
  • I am not sure if we are authorized to use JSON.Net in production.
  • We are serializing flat data from one table or view (no table join object mapping) and we don’t have to worry about the complexities of multi-level hierarchies or I’d choose JSON.net.

In the end, even though I tied myself to a specific implementation, I built in extensibility through abstractions. We can later swap serializers and also build new dumps based on other tables pretty easily. I could see possibly adding a feature to import a dump file for comparison in the system instead of having to use an external tool. This could also be the basis for moving data from system to system in a way that is much simpler than the previous project. Taking the time to look at multiple solutions showed me areas where I should think about and prepare for extension without going overboard with abstractions.

The real point is to try more than one thing when looking for a solution to a problem. Compare solutions to find reasons to use or not use them. Don't pick the first thing that comes your way or comes to mind. Spend a little time learning something new and experimenting, or you will rarely learn anything new on your own and will stay a slave of Google (no disrespect, as I lean heavily on Google search). This is especially important for engineers dealing with enterprise legacy applications. Don't let yourself get outdated like a 10 year old broken legacy application. Stay "So Fresh, So Clean".

SpecFlow Tagging

It seems like SpecFlow tagging has been a theme in many of my recent posts. It's a very powerful concept and central to how I control test execution. Hopefully, this will give someone some inspiration. When I learned about tagging, everything else seemed to click in my understanding of the SpecFlow framework. Tagging was a bridge for me.

A Tag in SpecFlow is a way to mark features and scenarios. A Tag is an at sign, @, followed by the text of the tag. If you tag a feature, it will apply to all of the feature's scenarios, and tagging a scenario will apply to all of the scenario's steps.

Out of the box, SpecFlow implements the @ignore tag. This tag will generate an ignore attribute in your unit test framework. Although this is the only out of the box tag, you can create as many tags as you like, and there are a lot of cool things you can do with them.

SpecFlow uses tags to generate categories or attributes that can group and control test execution in the unit test framework that is driving your tests. Tags are also used to control test execution outside the unit test framework in SpecFlow's event bindings and scoped bindings. You also have access to tags in your test code through the ScenarioContext.Current.ScenarioInfo.Tags property.

Another benefit of tagging is that the tags can be targeted from the command line. I can run my tests from the command line and indicate what tags I want to run. This means I can control testing on my continuous integration server.

As you can see, tags are very powerful indeed in shaping the execution of your tests. Below I will explain how I am currently using tags in a standardized way in my test framework. My tagging scheme is still a work in progress, and I am homing in on the proper balance of tags that will allow good control of the tests without creating accidental complexity and a maintenance nightmare.

In the feature file for UI tests I have the following tags:

  • Test Type – in our environment we run Unit, Integration, Smoke, and Functional tests, in order of size and speed. A feature definition should only include one test type tag; there can be situations where a Functional feature could include lower test types, but otherwise test types should not be mixed. So you could have @Smoke and @Functional, but not @Smoke and @Unit.
  • Namespace – each C# namespace segment is a tag. For example, if I have a namespace of Foo.Bar then my tags would be @Foo @Bar
  • SUT – the system under test is the class or name of the page or control
  • Ticket Number – the ticket number that the feature was created or changed on (e.g. @OFT11294). We prefix the number to better identify this tag as a ticket.
  • Requirement – this is a reference to the feature section in the business requirements document that the feature spec is based on (e.g. @Rq4.4.1.5). We prefix the number to better identify this tag as a requirement.

With the above examples our feature definition would look like this:

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome Thing-a-ma-gig

This allows me to target the running of tests for different namespaces, test types, SUTs, ticket numbers, requirements, or any combination of them. When a developer deploys changes for a ticket, I can run just the tests that target the ticket. This is huge in decreasing the feedback cycle. Instead of having to run all the tests, which could take hours, we can run a subset and get a quicker response on the outcome of the tests.

At the scenario level we want to tag the system under test (SUT). This allows us to run tests for a particular SUT, but it also gives us the flexibility of hooking behavior into our test runs. Say I want to instantiate a specific class for each scenario; if I used a BeforeScenario hook with no tagging, it would apply to every scenario in the test assembly because SpecFlow hooks are global. With tagging, it will only run for scenarios with matching tags.

…Feature File

@Integration @Foo @Bar @OFT11294 @Rq4.4.1
Feature: Awesome Thing-a-ma-gig

@Thingamagig @Rq4.4.1.5
Scenario: Awesome Thing-a-ma-gig works

…Step Class

[BeforeScenario("Thingamagig")]
public static void ScenarioSetup()
{
    sut = new Thingamagig();
}

We have the @Ignore tag that we can apply to features and scenarios to signal the test runner not to run the tagged item. There is also a @Manual tag that functions like the @Ignore tag for features and scenarios that have to be run manually. I did some custom logic to filter the @Manual tag, but you can find a simple way to do it in this short post on SpecFlow Manual Testing.

In my test framework I have fine grained control of test execution through a little helper class I created. I won't bore you with all of the code, but basically I use a scoped BeforeFeature binding to call a bit of code that decides if the feature or scenario should be run or not. Yes, this kind of duplicates what SpecFlow and the unit test framework already do, but I am a control freak. This code is dependent on SpecFlow and NUnit.Framework.

if (IgnoreFeature(FeatureContext.Current.FeatureInfo.Tags))
{
    Assert.Ignore();
    return;
}

The IgnoreFeature() method gets the tags to run or ignore from a configuration file. If a tag in FeatureContext.Current.FeatureInfo.Tags matches an ignore tag from configuration, it returns true; if the tag matches a run tag, it returns false. We also include matching on @Ignore and @Manual, even though there is already support for @Ignore. The same concept applies to scenarios and the ScenarioContext.Current.ScenarioInfo.Tags that are evaluated in a global BeforeScenario binding. In the example above I am using Assert.Ignore() to ignore the test. As you probably know, Ignore in unit test frameworks is usually just an exception that is thrown to immediately halt the test and mark it ignored. In my actual test framework, I replace Assert.Ignore() with my own custom exception so that the ignored tags can be logged.
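
For illustration, here is a simplified sketch of what such a helper might look like; in my real framework the ignore list comes from a configuration file, which is stubbed out here:

using System;
using System.Linq;

public static class TagFilter
{
    // Hypothetical stand-in for reading ignore tags from configuration.
    private static readonly string[] IgnoreTags = { "Ignore", "Manual" };

    public static bool IgnoreFeature(string[] tags)
    {
        return tags != null
            && tags.Any(t => IgnoreTags.Contains(t, StringComparer.OrdinalIgnoreCase));
    }
}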

With this method of tag based ignoring using a configuration file, we could add a tag for environment to control test execution by environment. I say this because I have seen many questions about controlling tests by environment. The point is, there are many ways to use tags, and the way I use them is just one. You can tag how you want, and there are some great posts out there to give you inspiration on how to use them for your situation.

Pros and Cons of my SpecFlow Tagging Implementation

Pros:

  • Fine grained control of test execution.
  • Controlling tests through configuration and command line execution.
  • Traceability of tests to business requirements and work tickets.

Cons:

  • Tags have no IntelliSense in Visual Studio.
  • Tags are static strings so spelling is an issue.
  • When a namespace or SUT name changes we have to remember to change the name in the feature and step files.
  • Tags can get messy and complicated, especially when a test covers multiple tickets or features.

Given the Keys to Merge

Today we were told that the Dev on Production Support will be responsible for merging branches. Branching here is a little different than I'm used to. Usually there is a trunk, main, or master branch that acts as the root that release or feature branches are branched off of. Here, release branching is not done from trunk but from the previous release branch. So, if we are currently working on release 5.0 and need to start on release 5.1, a new branch will be created off of the 5.0 branch. If there is concurrent development in both branches, then the new 5.1 branch needs to be kept in sync with the 5.0 branch, so we merge the 5.0 changes to the 5.1 branch. I am not sure how, or even if, we merge everything back to trunk, or even if we use trunk.

With this scheme, when there is a stockpile of changes it is difficult to reconcile all of the potential merge conflicts. If there was a change to the same file in both branches being merged, the merge can get a little hairy trying to reconcile everything. It was decided to merge more often so you don't have to face a mountain of merge issues. The Production Support Dev will merge branches, preferably daily. Since we all rotate Production Support duties, that means the entire team has the keys to merges.

My little brain always has questions, and my thought was: why don't we just merge when we have a change complete? If you make a change in a branch, you should issue a merge request to the team (à la Git, even though we are an SVN shop). A merge request is simply a message to the team asking them to code review the change. If the change passes code review, the Dev can merge the changes to related branches. Well, it seems that merging is time consuming in our environment. I haven't done it yet, but our tech lead said that we could merge our own changes if we have time. I assume this means that under the pressure to get the release complete there is usually no time to merge. I will try it myself and record the time, if I have time :). Although, my production code dev days are limited, so I won't get a lot of opportunities to put it to the test.

Main Line Development

At my previous employer we did main line development. This basically means we developed directly in main (i.e. trunk). Main wasn't stable and always had the latest changes for the next release. When we deployed a release, we would cut a release tag off of the commit in main that was deployed to production, so there would always be a tag that contains the currently deployed code.

Any branching outside of the release tag branching was frowned upon, and we were asked to limit branching (e.g. you better not branch) because merging was thought to be evil. So, we didn't do concurrent development on multiple releases; the focus was entirely on the current release. When the release was QA validated, we'd cut the release branch and move on to the next release. This meant that as the release slowly came to a close we were sometimes left twiddling our thumbs while we waited on QA validation and release branching.

If we needed to do a production hotfix, we would cut a branch off of the current production release tag, make the fix, QA validate it, cut a new release tag, and merge the changes to main.

It was a very clean process, but I remember having a strong desire to cut feature branches so I could work on the next thing.

Feature Toggles

Which brings up Feature Toggles/Flags. Feature Toggles would allow us to get the best of both worlds. Basically, you add a bit of code to indicate if a particular change is active. In fact, the active state can be set in configuration, so IT Operations could actually enable and disable new features and changes with a simple config change. With Feature Toggles we get to do main line and concurrent development at the same time. I have only heard of Feature Toggles in the context of other development stacks like Java. I wonder if there are any .NET shops out there successfully using Feature Toggles?
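
For illustration, here is a minimal sketch of a config-driven toggle; the key naming convention is made up:

using System.Configuration;

public static class FeatureToggle
{
    // Reads a flag like <add key="Feature.NewCheckout" value="true" /> from appSettings.
    public static bool IsEnabled(string featureName)
    {
        bool enabled;
        return bool.TryParse(
            ConfigurationManager.AppSettings["Feature." + featureName],
            out enabled) && enabled;
    }
}

// Usage: IT Operations flips the config value, no redeploy needed.
// if (FeatureToggle.IsEnabled("NewCheckout")) { /* new code path */ }
// else { /* old code path */ }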