Page Object as Collection of Controls as Collection of Elements

I love the whole modular development ideal. Creating small, discrete chunks of functionality instead of large monolithic globs of interdependent code helps promote code reuse and increases the maintainability of a code base. For me, it was natural to extend this ideal to the Page Object Model in functional, browser-based testing.

In ASP.NET Web Forms there is a style of web development that favors building the various parts of a page as User Controls. Your pages become a collection of User Controls that you stitch together at runtime. So, your home page might have a main content control, a slide show control, and a secondary content control. Wrapping the home page could be a master page that has a masthead control, navigation control, and footer control. Each control holds the elements that make up a particular section of a web page. Any time you need new functionality, you build a new user control and plug it into your application. When building the page objects for my tests, I figured I would follow the same concept.

When I model a web page as a page object I start with a page wrapper that provides a shell for the other objects contained in the page. I model the various user controls as control objects and add them to the page object that represents the page. This modularization also helps me quickly and easily compose new pages because I don't have to recreate common page parts.

The page wrapper just functions as a container control object; it can provide communication between the controls and a central point for accessing page state. I try to keep the page wrapper light on functionality and rely on composition to provide the functionality that tests need.

I mentioned master pages, and I model those through inheritance instead of composition. If the web page I am modeling uses a master page, its page object inherits from another page object that models the master page. This is another way to cut down on duplication while increasing maintainability.

This pattern is probably common in the testing community, so I need to do more research on it. It is a work in progress for me, as I am still not solid on how to implement the composition. Should the child objects know about their containing page object? How should I manage the WebDriver instance across controls? If a User Control is added to a page multiple times, how should I model it? I am trying different solutions to these problems, plus other common and edge cases that get me stuck in the mud from time to time. Hopefully, I can provide some tried and true strategies for this extension of the page object pattern as I exercise it over the next year.

For now, here is roughly where I am. I start with an interface to define the page object contract and a base page to define the core functionality all pages should have. From these abstractions I build up an actual page model, as I described earlier, by composing control objects that in turn compose element objects.

Below is some early code for my page abstractions. I won't go into the specifics of this code, but you can get the gist of where I am headed. One thing to note is that I have abstracted the concepts of Browser and Test Environment. This gives me flexibility in using various browser automation frameworks and the ability to easily configure tests for different test environments. I also have a base control object to model User Controls and an object that models page elements (think WebDriver element, but abstract so I can wrap any browser automation framework). One last note: PageKey is used by my reporting module. As test results are collected I also store the page key with the results so that I have traceability and can be more expressive when analyzing the result data.

//This is not production code
public interface IPage
{
    string PageKey { get; }
    string PageUrl { get; }
    string PageVirtualUrl { get; }
    ITestEnvironment TestEnvironment { get; }
    string Title { get; }
    bool HasTitle();
    bool HasUrl();
    bool IsOpen();
    void Open();
}

public class BasePage : IPage
{
    public BasePage(string pageKey, Browser browser, ITestEnvironment environment)
    {
        this.Initialize();
        this.PageKey = pageKey;
        this.Browser = browser;
        this.TestEnvironment = environment;
    }

    private BasePage()
    {
    }

    public Browser Browser { get; protected set; }

    public string PageKey { get; protected set; }

    public string PageUrl
    {
        get
        {
            return this.GetPageUrl();
        }
    }

    public string PageVirtualUrl { get; protected set; }

    public ITestEnvironment TestEnvironment { get; protected set; }

    public string Title { get; protected set; }

    // Compares the expected page title to the title the browser reports.
    public virtual bool HasTitle()
    {
        return this.Title == this.Browser.Title;
    }

    // Checks that the browser is currently at this page's URL.
    public virtual bool HasUrl()
    {
        if (string.IsNullOrEmpty(this.PageUrl))
        {
            return false;
        }

        return this.Browser.HasUrl(this.PageUrl);
    }

    public virtual bool IsOpen()
    {
        return this.HasUrl() && this.HasTitle();
    }

    public virtual void Open()
    {
        this.Browser.Open(this.PageUrl);
    }

    // Hook for derived pages to set Title, PageVirtualUrl, and compose their controls.
    protected virtual void Initialize()
    {
    }

    // Builds the full page URL from the test environment's base URL and the page's virtual URL.
    private string GetPageUrl()
    {
        if (this.TestEnvironment == null)
        {
            return string.Empty;
        }

        string baseUrl = this.TestEnvironment.BaseUrl;
        string virtualUrl = this.PageVirtualUrl ?? string.Empty;

        if (string.IsNullOrEmpty(baseUrl))
        {
            return string.Empty;
        }

        if (!baseUrl.EndsWith("/"))
        {
            baseUrl += "/";
        }

        if (virtualUrl.StartsWith("/"))
        {
            virtualUrl = virtualUrl.Substring(1);
        }

        return string.Format("{0}{1}", baseUrl, virtualUrl);
    }
}
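
To give a feel for the composition side, here is a rough sketch of how a control object might plug into a page object, and how a master page can be modeled through inheritance. This is my own illustration, not TestPipe code: SlideShowControl, MasterPage, HomePage, and the FindElement call on Browser are hypothetical stand-ins for the control, element, and browser abstractions mentioned above.

//This is not production code
public class SlideShowControl
{
    private readonly Browser browser;

    public SlideShowControl(Browser browser)
    {
        this.browser = browser;
    }

    public void ClickNext()
    {
        // FindElement is an assumed method on the Browser abstraction; it stands in
        // for however the element objects wrap the browser automation framework.
        this.browser.FindElement("nextSlide").Click();
    }
}

// The master page is modeled through inheritance...
public class MasterPage : BasePage
{
    public MasterPage(string pageKey, Browser browser, ITestEnvironment environment)
        : base(pageKey, browser, environment)
    {
    }
}

// ...and the page composes its control objects.
public class HomePage : MasterPage
{
    public HomePage(Browser browser, ITestEnvironment environment)
        : base("Home", browser, environment)
    {
        this.SlideShow = new SlideShowControl(browser);
    }

    public SlideShowControl SlideShow { get; private set; }

    protected override void Initialize()
    {
        this.Title = "Home";
        this.PageVirtualUrl = "/";
    }
}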

Have You Heard about Event Sourcing?

I cut my teeth on programming in the BASIC language on a computer that had a tape recorder as its persistent memory store (if you don't know what a tape recorder is, Google it). From there I transitioned to VBA and VBScript, which wasn't a stretch because it was all procedural: chaining a bunch of instructions to make the computer do what I wanted.

During my VB scripting days I was exposed to ASP and to relational databases through Access and then SQL Server. Codd's third normal form was not that much of a stretch for me to grasp. When .NET 1.0 was released I jumped from ASP to ASP.NET and took all of my procedural habits along with me to Visual Basic .NET.

Then, as I was barely getting started with .NET, I heard all the buzz around C# and object orientation, and I just didn't get it. I tried to force my procedural understanding into an OOP hole. In ASP I would create separate scripts for little pieces of functionality I wanted to reuse, and I thought I was getting the same benefit of reuse and object composition that everyone was raving about with C#. How little I knew.

Today I find myself in the same boat trying to understand Event Sourcing. I am trying to fit Event Sourcing into a relational hole, but this time I won’t spend a couple years just doing it absolutely wrong. My boss asked me to talk about Event Sourcing and I took it as an opportunity to learn more about it, even though I will likely never give the talk. I did quite a bit of research and this is more of a post on where you can find some useful info.

Everyone starts with Wikipedia definitions, not sure why, but here is Event Sourcing according to Wikipedia…wait, there isn’t a Wikipedia page for it (as of 8/26/2013). Even Martin Fowler has Event Sourcing as a Work-in-Progress on his EAA page on the subject. So why the hell are we talking about it?

Event sourcing is in production on some of the most data intensive systems on the planet. People way smarter than me advocate it. Also, sometimes it’s nice to be on the cutting edge of a movement as it forces you to innovate.

Event Sourcing is a data persistence technique that focuses on capturing the state of an application over time. Each state change is captured in an event object, and the events are stored in sequential order according to the time the change occurred. Once an event is captured it can't be changed or undone; it is immutable. To correct a mistaken state change you have to issue a compensating state change. So your persisted state is the gospel: if it was stored you can trust that it is true and wasn't tampered with (outside of some malicious change to mess with you).
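
To make that concrete, here is a minimal sketch of the idea (mine, not taken from any particular event store): an append-only stream of immutable events, a compensating event to fix a mistake, and current state rebuilt by replaying the stream.

//This is not production code
using System;
using System.Collections.Generic;
using System.Linq;

public class AccountEvent
{
    public AccountEvent(DateTime occurredOn, decimal amount, string reason)
    {
        this.OccurredOn = occurredOn;
        this.Amount = amount;
        this.Reason = reason;
    }

    // Immutable: no public setters, so a stored event can't be changed after the fact.
    public DateTime OccurredOn { get; private set; }
    public decimal Amount { get; private set; }
    public string Reason { get; private set; }
}

public class AccountEventStream
{
    private readonly List<AccountEvent> events = new List<AccountEvent>();

    // Events are only ever appended, never updated or deleted.
    public void Append(AccountEvent accountEvent)
    {
        this.events.Add(accountEvent);
    }

    // Current state is derived by replaying every event in order.
    public decimal CurrentBalance()
    {
        return this.events.Sum(e => e.Amount);
    }
}

// Usage: a mistaken deposit isn't edited; it is corrected with a compensating event.
// var stream = new AccountEventStream();
// stream.Append(new AccountEvent(DateTime.UtcNow, 100m, "Deposit"));
// stream.Append(new AccountEvent(DateTime.UtcNow, -100m, "Compensate mistaken deposit"));
// decimal balance = stream.CurrentBalance(); // 0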

OK, I'm not sure about you, but when I learned this it blew my mind. The idea of persisting the entire history of the state of my application was a red pill or blue pill moment for me. On one hand it seemed terribly inefficient to store the state of every object in my application, especially since most of the discussion is about using NoSQL databases. How could you possibly query this data easily, and what benefit does it get me? Then I learned about the ease of data recovery and production incident research, and being able to replay events that happened in production last month on my local box today…what!

Then I had an epiphany. I have source control for my code; it took me a little while to get comfortable with it, and it provides a lot of benefits for me and hopefully for you too. Event Sourcing is a little like source control for application state. Actually, SVN is an example of Event Sourcing in use in my environment at work today. This understanding made it feel practical to me, but I was still unclear on real-world usage and what scenarios would benefit most from Event Sourcing.

Being in the financial industry, auditing is a big deal, and Event Sourcing could provide an instant audit log of every transaction we record. Yet logging every event seemed a little like overkill. I won't try to persuade you either way or actually try to explain it to you, as I couldn't do the subject justice, but I decided it was too much for my current projects. Actually, a couple of videos by one of the originators of CQRS (another concept that I am researching) have a lot to offer on the subject of event sourcing. It's buried in these references, but it's all related and, in my opinion, all fascinating, especially if you are into broadening your coding horizons.

CQRS/DDD by Greg Young – YouTube

http://cqrs.wordpress.com/video/

WinDbg, a Real Developer's Debugger

WinDbg is something I have never really used; I just ran through a couple of demos a few years ago. I always see serious engineers using it as their debugger, and I want to grow up to be a serious engineer, so I set out to learn WinDbg once again. Actually, if you can't hear the trauma in my voice, I am still recovering from the mother of all bugs and I am prepping myself for the next time I get a crazy issue.

I can't really give you a reason to use WinDbg yet, but if you want a legitimate one, you can check out this answer on Stack Overflow: http://stackoverflow.com/questions/105130/why-use-windbg-vs-the-visual-studio-vs-debugger. I have had my share of ugly debug problems and I want to know if WinDbg will give me more insight. So, I will learn it now, and the next time one of those debug problems rears its ugly head I will hit it with WinDbg and see what I get.

Install

Try this at your own risk and don’t attempt this at home and all that legal stuff. For me this was Difficult! (with a capital D)

First, there isn't a standalone version that I could find, so I had to install the Windows SDK. Then I had to find a version compatible with my environment, Windows 7 and .NET 4. Most links to the SDK redirect to the newest version. After many install, uninstall, Google loops, the correct process for my computer was this (it may be different for you):

  1. Uninstall Microsoft Visual C++ 2010 x64 Redistributable and Microsoft Visual C++ 2010 x86 Redistributable
  2. Uninstall Microsoft Visual C++ Compilers 2010 SP1Standard x64 and Microsoft Visual C++ Compilers 2010 SP1Standard x86
  3. Install the Windows 7.1 SDK – http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=8279
  4. Your installation will only be partially complete after the compiler error. Install the Microsoft Visual C++ 2010 Service Pack 1 Compiler Update for the Windows SDK 7.1 – http://www.microsoft.com/en-us/download/confirmation.aspx?id=4422.
  5. Re-run your Windows SDK web installer. Choose the first option to add features to the existing installation.
  6. Re-choose (either under redistributables or common) the desired features, including the Debugger Tools.

If you have errors, I recommend you open the log in Notepad, look for the reason for the failure, and plug it into Google. I was able to resolve a few issues like that, and I don't care to rehash what I went through as it's boring as hell and painful to talk about.

On my setup I find WinDbg here – C:\Program Files\Debugging Tools for Windows (x64)

Specify Symbol Location

As you probably know the Visual Studio debugger works with symbols to provide you with information about the source code you are debugging. WinDbg is no different and you should tell it where to find your symbols.

_NT_SYMBOL_PATH
C:\symbols; SRV*C:\symbols*http://msdl.microsoft.com/download/symbols

*Note: This is a global setting and will affect symbol loading in Visual Studio. Found out about this the hard way and got some good info on issues it can cause here, http://blogs.msdn.com/b/mahuja/archive/2008/07/08/resolving-very-slow-symbol-loading-with-vs-2008-during-debugging.aspx.

This tells WinDbg where to look for code symbols. With the path above, WinDbg first checks the local C:\symbols folder; if a symbol still isn't found, it tries to download it from the Microsoft symbol server and stores it in C:\symbols for next time.

Basic Commands

If you want some basic WinDbg commands and a quick start you can check out http://mtaulty.com/communityserver/blogs/mike_taultys_blog/archive/2004/08/03/4656.aspx. I am too lazy to blog about my experience with my basic WinDbg walk through. I will update the blog when I get to really flex WinDbg’s muscles.

Anyway, if you are a brave soul or one of those coding genius types and you actually try WinDbg, I hope it provides you with some extra firepower in your debug arsenal.

Generate Test Service from WSDL

I had the genius idea to create a test service implementation to test our service clients. I would use the test service as a fake stand-in for the real services, as the real services are managed by third parties. Imagine wanting to test your client calls to the Twitter API without getting burned by the firewall while trying to call an external service.

As usual, it turns out that I am not a genius and my idea is not unique. There isn't a lot of info on how to do it in the .NET stack, but I did find some discussions after a few Bing searches. I ended up using wsdl.exe to extract a service interface from the WSDL and implementing that interface in simple ASMX files. I won't go into the details, but this is basically what I did:

    1. Get a copy of the WSDL and XSD from the actual service.
    2. Tell wsdl.exe where you stored these files and what you want to do, which is generate a service interface
      wsdl.exe yourFile.wsdl yourfile.xsd /l:CS /serverInterface
    3.  Then implement the interface as an ASMX (you could do WCF, but I was in a hurry)
    4. Lastly, point your client to your fake service

In your implementation, you can return whatever you are expecting in your tests. You can also capture the HTTP messages. Actually, the main reason I wanted to do this was to figure out if my SOAP message was properly formatted, and I didn't want to go through all of the trace listener diagnostics configuration or the Fiddler port capture crap. There may be easier ways to do this, and even best practices around testing service clients, but this was simple, fast, and easy, IMHO, and it actually opened up more testing opportunities for me as it's all code and automatable (is that a word?).
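
As a rough sketch of step 3, a fake ASMX implementation could look something like this. IOrderService, OrderRequest, and OrderResponse are made-up stand-ins for whatever wsdl.exe generates from your WSDL and XSD; the real generated interface will also carry SOAP attributes that I have left out.

//This is not production code
using System.Web.Services;

// Stand-ins for the wsdl.exe generated interface and message types.
public interface IOrderService
{
    OrderResponse SubmitOrder(OrderRequest request);
}

public class OrderRequest
{
    public string OrderId;
}

public class OrderResponse
{
    public string Status;
}

// The fake service backs an .asmx file and returns canned responses for tests.
[WebService(Namespace = "http://example.com/orders")]
public class FakeOrderService : WebService, IOrderService
{
    [WebMethod]
    public OrderResponse SubmitOrder(OrderRequest request)
    {
        // You could log HttpContext.Current.Request here to inspect the raw SOAP message.
        return new OrderResponse { Status = "Accepted" };
    }
}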

New Laptop, New Problems…Mind Your Install Order

I recently moved my local development to a new laptop. Actually, a pretty sweet laptop, solid-state drive and everything. My IT department set the laptop up, and IIS was not installed, but .NET and Visual Studio were. So there were a few things I had to do to get .NET properly registered in IIS. Here are some symptoms and solutions, just in case you run into these problems or I forget what I did.

SqlException

When trying to use a web app that connects to a database I would get an SqlException,

A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 – The message received was unexpected or badly formatted.).

A simple command cleared this up.

netsh winsock reset

Actually, I got this fix from a co-worker and he was even good enough to provide some research on the possible cause of the issue, but I didn’t read any of it. I already have too much reading to do. As I understand it, it had something to do with my upgrade to Visual Studio 2013, but don’t quote me.

http://technet.microsoft.com/en-us/library/cc753591(v=WS.10).aspx

http://www.techsupportforum.com/forums/f31/netsh-int-ip-reset-and-netsh-winsock-reset-467970.html

http://support.microsoft.com/kb/299357

WCF Service Issue

When trying to run a service locally I kept getting a 405 error in my UI. Under the covers I was actually getting a 404.3: "The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map."

You can check the reference on this issue here: http://msdn.microsoft.com/en-us/library/ms752252(v=vs.90).aspx

As it turns out, in my situation it was another artifact of the botched install order. I had to register WCF:

  1. Run Visual Studio Command Prompt as “Administrator”
  2. Change directory to C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation
  3. Run
ServiceModelReg -i

For good measure I also registered ASP.NET with IIS by running these commands in the same command prompt (you don’t have to change directory):

aspnet_regiis -i

then reset IIS

iisreset

Actually, I had to do even more. The above actions registered everything on the server, but the settings weren't copied down to my websites and services. This is another install order issue, as I installed the sites before everything was registered. There may be an easier way to fix this, but I updated my Default Web Site (this is where I place all my sites and services). In Features View I opened Handler Mappings, and in the Actions pane I clicked Revert to Parent. Then I had to do this in my sites and services. Actually, because there are so many and I have some handy dandy automated install/uninstall scripts, I just uninstalled everything and reinstalled all the sites, which picked up the configuration change.

Now that order has been restored to my world I can get back to enjoying this new laptop.

So You Want To Automate Web Based Testing

I am on my 3rd enterprise scale automated test project and figured I should begin distilling some of the lessons and best practices I have learned along the way. I am by no means an expert in automated testing, but with my new position as Automation Engineer I plan on being one. In addition to my day job, I am in the process of building an automated functional testing framework, TestPipe. It's part of my push to improve my Code Quality Pipeline in my personal projects. So, with all of the focus on automated testing I have right now, it's a good time to start formalizing the stuff in my head in a way that I can share with the development community. Don't expect much from this first post, and I am sorry if it gets incoherent. I am writing from the hip and may not have time for a heavy edit, but I have to post it because I said I would post on the 1st. As I get a better understanding of how to express my experience, hopefully I will have much better posts coming up.

Test Environment

First I want to talk a little about the test environment. You should be able to build and deploy the application you want to test. This is more for Developers in Test or testers that actually write code. This is not necessary in every situation, but if you are responsible for proving that an application works as required, then you should be able to get the latest version of the app, build it, deploy it, configure it for testing, and test it. I am of the opinion that you should be able to do this manually before you actually get into the nuts and bolts of automating it. It is important to have a firm grasp on the DevOps pipeline for your application if you are looking to validate the quality of the application's development.

This isn't feasible for every application, but having a local test environment is definitely worth the effort. Being able to check out code, then build and deploy it locally, gives you visibility into the application that is hard to achieve on a server. Also, having a local environment affords you a more flexible workspace to experiment and break stuff without affecting others on your team.

You should also have a server based test environment apart from the development and business QA environment. This could be physical or virtualized servers, but you should have a build server and the application and web servers necessary to deploy your application. Once you have your Quality Pipeline automated, you don’t want developers or QA messing around on the server invalidating your tests.

Test Configuration

Configuring the application for test is a big subject, and you should give a lot of thought to how best to manage the configuration. To configure the application for test you may want to create the application database, seed it with test data, update the configuration to swap an actual SMTP server for a fake server that lets you test email without needing the network…and more. Then there is configuring the type of test. You may want to run smoke tests every time developers check in code, a full functional test nightly, and a full regression before certifying the release as production ready. These tests can mean different things to different people, but the point is you will probably need to configure for multiple types of tests. On top of that, with web based and mobile testing you have to configure for different browsers and operating systems. Like I said, configuration is a big subject that deserves a lot of thought and a strategy in order to have an effective Quality Pipeline.
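
As a small illustration of the configuration side (my own sketch, not TestPipe code), the kind of test environment abstraction I mentioned earlier can be fed from a config file so the same tests can be pointed at different environments. The interface shape and the appSettings key are assumptions.

//This is not production code
using System.Configuration;

public interface ITestEnvironment
{
    string BaseUrl { get; }
}

// Reads environment settings from App.config so the same tests can target
// local, CI, or QA environments by changing configuration only.
public class ConfigTestEnvironment : ITestEnvironment
{
    // e.g. <add key="TestEnvironment.BaseUrl" value="http://localhost:8080/" />
    public string BaseUrl
    {
        get { return ConfigurationManager.AppSettings["TestEnvironment.BaseUrl"]; }
    }
}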

Test Code

I won’t talk about the details of test code here, but a little about test code organization. Most of my test experience is in testing web applications and services on a Microsoft stack, but a well structured test code base is necessary for any development stack. I generally start with 4 basic projects in my test solutions:

  • Core – this provides the core functionality and base abstractions for the test framework.
  • Pages – this provides an interactive model of the pages and controls under test.
  • Data – this provides functionality to manage test data and data access.
  • Specs – this provides the definition and implementation of tests and utilizes the other three projects to do its job.

In reality, there are more projects in a robust testing framework, but this is what I consider my base framework, and I allow the other projects to be born out of necessity. I may have a project to provide base and extended functionality for my browser automation framework to provide browser access to the Pages project. I may have a project to capture concepts for test design to provide better organization and workflow to the Specs project. I will have even more projects as I gain an understanding of how to test the application, and my architecture will evolve as I find concepts that I foresee reusing or replacing. A typical start to a new project may look something like the layout below (this is somewhat similar to the TestPipe project I have underway to provide a ready-made solution for a base test framework).

  • Core
    • IPage – contract that all pages must implement.
    • BasePage – base functionality that all pages inherit from.
    • TestHelper – various helpers for things like workflow, logging, and access to wrappers around testing tools like SpecFlow.
    • Cache – an abstraction and wrapper around a caching framework.
    • In TestPipe, I also abstract the browser and page elements (or controls) so I am not dependent on a specific browser automation framework.
  • Pages
    • Section – I generally organize this project somewhat like the website sitemap: a section is just a grouping of pages, and there is a folder for each section. It may make more sense for you to organize this by functional grouping, but following the sitemap has worked for me so far.
      • Page – page is a model of a page or control and there is one for each page or control that needs to be tested.
  • Specs
    • Features – I have a secret love affair with SpecFlow and this folder holds all of the SpecFlow Feature files for the features I am testing.
      • Under the Features folder I may or may not have Section folders similar to the Pages project, but as I stated this could be functional groupings if it makes more sense to you.
    • Steps
      • Organizing steps is still a work in progress for me and I am still trying to find an optimal way to do it. Steps are implementations of feature tests, but they are sometimes abstract enough to be used for multiple features, so in addition to Section folders I may also have a Common or Global folder for shared steps, or other units of organization.
  • Data
    • DTO – these are just plain old objects that mirror tables in the database (a la Active Record).
    • TestData – this provides translation from DTOs to what's needed in the test. The actual data needed in tests can be defined and configured in an XML file, spreadsheet, or database, and the code in this folder manages seeding the test data into the database and retrieving the data for the test scenarios based on the needs defined in the test data configuration.
    • DataAccess – this is usually a micro-ORM: PetaPoco, Massive…my current flavor is NPoco. It is just used to move data between the database and the DTOs. We don't need a full-blown ORM for our basic CRUD needs, but we don't want to write a bunch of boilerplate code either, so micro-ORMs provide a good balance. (There is a small sketch of these pieces after this list.)
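
Here is a rough sketch of how those Data pieces fit together. The table, class, and method names are made up, and the data access is shown behind a simple interface rather than a specific micro-ORM call, so treat it as a shape, not an implementation.

//This is not production code
public class UserDto
{
    // Mirrors the columns of a hypothetical Users table.
    public int Id { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }
}

// In a real project this would be backed by a micro-ORM such as NPoco.
public interface ITestDataAccess
{
    void Insert(UserDto user);
    UserDto GetByUserName(string userName);
}

public class UserTestData
{
    private readonly ITestDataAccess dataAccess;

    public UserTestData(ITestDataAccess dataAccess)
    {
        this.dataAccess = dataAccess;
    }

    // Seeds a known user so scenarios have predictable data to work with.
    public UserDto SeedRegisteredUser()
    {
        var user = new UserDto { UserName = "test.user", Email = "test.user@example.com" };
        this.dataAccess.Insert(user);
        return user;
    }
}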

Automated Test Framework

I use a unit test framework as my automated test framework. The unit test framework is used as the test runner for the test scenario steps, and it uses a browser automation framework like Selenium or WatiN (a .NET version of Watir) to drive the browser. The browser automation is hidden behind the page objects, which we will discuss later. The main takeaway is that unit test frameworks are not just for unit tests, which is why I call them automated test frameworks.

I use both NUnit and MSTest as my unit test framework. To be honest, I haven't run into a situation where MSTest was a terrible choice, but I have heard many arguments from the purists out there on why I should use something else if I want to achieve test enlightenment. I use NUnit because I use it at work and there is a strong community around it, but 9 times out of 10 I will use MSTest as my framework if there is no precedent to use another one.

Another benefit of MSTest is that it doesn't need any setup in Visual Studio; it comes out of the box with Visual Studio. The integration with Visual Studio and the MS stack is great when you want something quick to build a test framework around. It just works, and it's one less piece of the puzzle I have to think about.

If you have a requirement that points more in the direction of another unit test framework, as long as it can run your browser automation framework, use it. Like I said, I also use NUnit and, actually, it's not that difficult to set up. To set up NUnit I use NuGet. I generally install the NuGet package along with the downloadable MSI. The NuGet package only includes the framework, so I use the MSI to get the GUI set up. With Visual Studio 2013 you also get some integration in that you can run and view results in the VS IDE, so you really don't need the GUI if you run 2013 or you install an integrated runner in lower versions of VS.

In the end, it doesn’t matter what you use for your test runner, but you should become proficient at one of them.
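
Whichever runner you choose, the shape of the test class is the same. Here is a bare NUnit skeleton (a sketch of my own, with the browser and page object work left as comments) just to show the unit test framework acting as the automated test runner:

//This is not production code
using NUnit.Framework;

[TestFixture]
public class HomePageTests
{
    [SetUp]
    public void SetUp()
    {
        // Start the browser and open the page under test here.
    }

    [Test]
    public void HomePage_Should_Be_Open()
    {
        // Drive the page through its page object and assert on what the user would see.
        Assert.Pass();
    }

    [TearDown]
    public void TearDown()
    {
        // Close the browser and clean up any seeded test data here.
    }
}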

Browser Automation

I use Selenium WebDriver as my browser automation framework, but you could use anything that works for your environment and situation. WatiN is a very good framework, and there are probably more. WebDriver is just killing it in the browser automation space right now. In the end, you just need a framework that will allow you to drive a browser from your automated test framework. You could roll your own crude driver that makes HTTP requests if you wanted to, but why would you? Actually, there is a reason, but let's not go there.

Again, I am a .Net Developer, and I use NuGet to install Selenium WebDriver. Actually, there are two packages that I install, Selenium WebDriver and Selenium WebDriver Support Classes. The Support Classes provide helper classes for HTML Select elements, waiting for conditions, and Page Object creation.
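
To give a feel for what the page objects end up wrapping, here is a minimal WebDriver snippet. The URL and element ID are placeholders, and FirefoxDriver is just one choice; any of the WebDriver implementations would do.

//This is not production code
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

public class WebDriverSmokeTest
{
    public void SearchFromHomePage()
    {
        IWebDriver driver = new FirefoxDriver();
        try
        {
            // Placeholder URL and element ID; use whatever your application exposes.
            driver.Navigate().GoToUrl("http://localhost:8080/");
            IWebElement searchBox = driver.FindElement(By.Id("search"));
            searchBox.SendKeys("widgets");
            searchBox.Submit();
        }
        finally
        {
            driver.Quit();
        }
    }
}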

Test Design

As you build tests you will discover that patterns emerge, and some people go as far as building DSLs (domain-specific languages) around the patterns to help them define tests in a more structured and simplified manner. I use SpecFlow to provide a more structured means of defining tests. It is an implementation of Cucumber and brings Gherkin and BDD to .NET development. I use it to generate the unit test code and stubs of the actual steps that the automated test framework calls to run tests.

In the step stubs that SpecFlow generates for the test automation framework, I call page objects that call the browser automation framework to drive the browser as a user would for the various test scenarios. Inside the steps you will undoubtedly find patterns that you can abstract, and you may even go as far as creating a DSL, but you should definitely formalize how you will design your tests.
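
To show the shape of that, here is a rough sketch of a SpecFlow step binding calling a page object. The feature wording and the SearchPage class are hypothetical; in a real solution SearchPage would live in the Pages project and drive the browser automation framework.

//This is not production code
using NUnit.Framework;
using TechTalk.SpecFlow;

// Stand-in for a real page object from the Pages project.
public class SearchPage
{
    public void Open() { /* navigate via the browser automation framework */ }
    public void Search(string term) { /* type the term and submit the search form */ }
    public bool HasResults() { return true; /* query the results element on the page */ }
}

[Binding]
public class SearchSteps
{
    private SearchPage searchPage;

    [Given(@"I am on the search page")]
    public void GivenIAmOnTheSearchPage()
    {
        this.searchPage = new SearchPage();
        this.searchPage.Open();
    }

    [When(@"I search for ""(.*)""")]
    public void WhenISearchFor(string term)
    {
        this.searchPage.Search(term);
    }

    [Then(@"I should see search results")]
    public void ThenIShouldSeeSearchResults()
    {
        Assert.IsTrue(this.searchPage.HasResults());
    }
}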

More .Net specific info. I use NuGet to install SpecFlow also. Are you seeing a pattern here? If you are a .Net developer and you aren’t using NuGet, you are missing something special. In addition to NuGet you will need to download the SpecFlow Visual Studio integration so that you get some of the other goodies like item templates, context menus, and step generation…etc.

Build Server

I use both Team Foundation Server Build and CruiseControl. Again, the choice of platform doesn't matter; this is just another abstraction in your Quality Pipeline that provides a service. Here the focus is on being able to check out code from the source code repository and build it, all automatically without your intervention. The build server is the director of the entire process. After it builds the code it can deploy the application, kick off the tests, and collect and report test results.

I use NAnt with CruiseControl to provide automation of the deployment and configuration. TFS can do the same. They both have the ability to manage the running of tests, run static code analysis, and report on the quality of the build.

Conclusion

Well, that's the lowdown on the parts of my general test framework. Hopefully, I can keep it going and give some insight into the nitty-gritty details of each piece.

Lambdas and Generic Delegates

It took me a century to figure out lambdas and what they are all about. So, as I was thinking about what to blog about next, I thought I'd share the moment that lambdas and I clicked.

When I looked at lambdas as shorthand for anonymous delegates and expressions, I came up with a phrase to help me visualize what's going on. When I read a lambda I say, "with value of variable return expression." A Predicate delegate expressed as the lambda x => x == 1 would read, "with value of x return x is equal to 1." When I made => read as return, I stopped trying to force it into some kind of comparison operator (if you have code dyslexia like me, you know what I'm talking about).

Another way to look at it is in terms of what's happening under the hood. A lambda is shorthand for an anonymous delegate. In longhand, as a full anonymous delegate declaration, the above lambda would be:

delegate(int x) { return x == 1; }
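
Both forms are interchangeable anywhere a Predicate<int> is expected. This little snippet (mine, just for illustration) does the same lookup twice:

//This is not production code
using System;
using System.Collections.Generic;

class PredicateExample
{
    static void Main()
    {
        var numbers = new List<int> { 3, 1, 2 };

        // List<T>.Find takes a Predicate<T>; lambda and anonymous delegate both satisfy it.
        int viaLambda = numbers.Find(x => x == 1);
        int viaDelegate = numbers.Find(delegate(int x) { return x == 1; });

        Console.WriteLine(viaLambda);   // 1
        Console.WriteLine(viaDelegate); // 1
    }
}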

Generic Delegates

Since I mentioned anonymous delegates, I should give a passing shout-out to generic delegates. The ones I use most often:

Predicate must return true or false. It can accept only one parameter.

Func returns a value of the type given by its last type parameter. It can accept up to 4 input parameters in .NET 3.5 and up to 16 in 4.0.

Action does not return a value. It can accept up to 4 parameters in .NET 3.5 and up to 16 in 4.0.

Comparison returns a signed int indicating the result of comparison. If the returned int is

< 0 then x < y
= 0 then x = y
> 0 then x > y

The Comparison delegate can accept two objects of the same type that will be compared (x and y).

Converter returns the result of converting an object from one type to another. It accepts one input object (the object to be converted) and returns the output object (the converted input).

EventHandler returns void. It accepts a System.Object (the source of the event) and a TEventArgs (the event's data).
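
Here are one-line examples of each of these, written as lambdas (my own illustrations):

//This is not production code
using System;

class GenericDelegateExamples
{
    static void Main()
    {
        Predicate<int> isOne = x => x == 1;
        Func<int, int, int> add = (x, y) => x + y;
        Action<string> log = message => Console.WriteLine(message);
        Comparison<int> compare = (x, y) => x.CompareTo(y);
        Converter<string, int> parse = s => int.Parse(s);
        EventHandler<EventArgs> handler = (sender, e) => Console.WriteLine("event raised");

        log("isOne(1) = " + isOne(1));           // True
        log("add(2, 3) = " + add(2, 3));         // 5
        log("compare(1, 2) = " + compare(1, 2)); // negative number
        log("parse(\"42\") = " + parse("42"));   // 42
        handler(null, EventArgs.Empty);          // prints "event raised"
    }
}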

Anyway, delegates are a powerful feature of C# that I am guilty of not taking advantage of like I should. Hopefully, doing this little post will help ingrain them more into my solution designs. If you haven't been exposed to, or aren't currently using, generic delegates or lambda expressions, I challenge you to see how they can help solve some of your problems in a more abstract, efficient, and maintainable way. Try them out; you may discover a new way to think about and write code.

Free Books from Syncfusion

Have you heard about the free e-books offered by Syncfusion (http://www.syncfusion.com/resources/techportal/ebooks)? Well, I figured they would be some half-hearted whitepapers without any real meat, you know, the fluff companies release to lure you into buying something from them. When I cracked open the first book I was pleasantly surprised by the wealth of knowledge and the quality. I mean, I have paid good money for similar books and came away with a lot less than I have learned from these.

I haven’t had a chance to read many of them, but my favorites so far are:

Unit Testing Succinctly
Data Structures Succinctly Part 1
TypeScript Succinctly

The fact that I was able to immediately apply concepts I learned in these books at work and in my personal development is a big plus for me. Most books I have read lately have rehashed concepts I am already familiar with or talk about new concepts that I can't use at work. Granted, these books also rehash concepts I am familiar with, but they include nuggets and new perspectives that gave new light and understanding to those concepts.

Anyway, if you build software you owe it to yourself to check them out. You have nothing to lose.

By the way, the books actually did get me to look more into the Syncfusion product line. I won't do a review, but if you are into cross-platform development you should check out Orubase – http://www.orubase.com/. It makes a compelling case as a development platform, as it enables you to use your current ASP.NET skills to build business applications across the iOS, Android, and Windows Phone platforms.

Lastly, if you need icons, especially Metro-style icons, check out their free Metro Studio library of 2,500 icons plus an icon editor at http://www.syncfusion.com/downloads/metrostudio. The editor even makes sprites. Did I mention Metro Studio is free too? Because it's free…no cost. Syncfusion is really giving back to the development community.

Release the Code Quality Hounds

I wanted to start using more quality tools as part of my initiative to improve my Code Quality Pipeline. So I decided to implement Code Analysis to ensure the project maintains certain coding standards. In addition to the FxCop-style static analysis of the MSIL code that Visual Studio Code Analysis performs, I decided to also use StyleCop to validate that my source code is up to standard. This would be a major undertaking in most legacy applications, but if you have been following Microsoft coding standards or you're working in a greenfield app, I would definitely recommend implementing these code quality cop bots.

Code Analysis

When you enable Code Analysis for your managed code, it will analyze your managed assemblies and report information about them, such as violations of the programming and design rules set forth in the Microsoft .NET Framework Design Guidelines.

To get Code Analysis up and running I right-clicked on my project and clicked Properties. Then I clicked the Code Analysis tab.

[Screenshot: the Code Analysis tab in the project properties]

This screen allows us to configure Code Analysis. The first thing I wanted to do was create a custom rule set, which lets me configure and save the rules I want for code analysis. So, I selected Microsoft All Rules, clicked Open, then File > Save As and saved the rule set file in my solution root folder. Then I edited the file in Notepad++ to give it a better name. Then I went back to VS and selected my custom rule set.

To enable Code Analysis on build, I checked the box for Enable Code Analysis on Build. You may want to only do this on your release target or other target that you run before certifying a build production ready. I did this on every project in the solution, including tests as I didn’t want to get lazy with my quality in tests. Having Code Analysis enabled will also cause Code Analysis to run on my Team Build Gated Check-in build as it is set to run Code Analysis as configured on the projects.

Also, I wanted Code Analysis violations treated as errors on build, so I added this to the Debug PropertyGroup:

 <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>

Lastly, to do a quick test, I right clicked on my Solution in Solution Explorer and clicked “Run Code Analysis on Solution.” Since I had some code already written it did return some issues which I fixed. I then checked the code in to TFS and the code analysis also ran on Team Build.

StyleCop

StyleCop analyzes C# source code to enforce a set of style and consistency rules. First I downloaded and installed StyleCop.

For some reason getting StyleCop up and running wasn't as easy as it usually is. So I am going to explain an alternate route, but bear in mind that the best route is NuGet.

Actually, I installed StyleCop from NuGet, but it didn't configure the tooling in Visual Studio properly, so I downloaded the package from the project site, http://stylecop.codeplex.com, and reinstalled from the downloaded installer. I tried adding the NuGet package to add the StyleCop MSBuild target, but that too resulted in too many issues. So I followed the instructions in the StyleCop docs to get a custom build target installed for my projects, http://stylecop.codeplex.com/wikipage?title=Setting%20Up%20StyleCop%20MSBuild%20Integration&referringTitle=Documentation.

I decided to show StyleCop violations as errors. So, I added

<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

to the project files. This instructs the project to treat StyleCop violations as a build error.

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProjectGuid>{077D5FD7-F665-4CB1-947E-8E89D47E2689}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>CharlesBryant.AppNgen.Core</RootNamespace>
    <AssemblyName>AppNgen.Core</AssemblyName>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <SccProjectName>SAK</SccProjectName>
    <SccLocalPath>SAK</SccLocalPath>
    <SccAuxPath>SAK</SccAuxPath>
    <SccProvider>SAK</SccProvider>
    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>
    <StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>
  </PropertyGroup>

I wanted StyleCop integrated into my local and Team Build so I added the build target to the project files. The build target is installed when you install StyleCop, if you select it. You will have to copy the target from the Program Files/MSBuild folder to a folder within your solution folder so it can be checked into TFS. Then you can point to it in your project files to cause StyleCop to run on build by adding

<StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>

to your PropertyGroup and

<Import Project="$(StyleCopTargets)" />

If I were you, I would stick with the StyleCop.MSBuild NuGet package to integrate StyleCop into your build. The route I took and explained above required a lot of debugging and reworking that I don't want to blog about. I actually tried the NuGet package in another project and it worked perfectly.

Conclusion

In the end, I have Code Analysis and StyleCop running on every build, locally or remote, and violations are treated as errors so builds are prevented when my code isn’t up to standard. What’s in your Code Quality Pipeline?

Visual Studio Architecture Validation

As a part of my Code Quality Pipeline I want to validate my code against my architectural design. This means I don't want invalid code integrations, like client code calling directly into data access code. With Visual Studio 2012 this is no problem. First I had to create a Modeling Project. Then I captured my architecture as a layer diagram. I won't go over the details of how to do this, but you can find resources here:

http://www.dotnetcurry.com/ShowArticle.aspx?ID=848
http://msdn.microsoft.com/en-us/library/57b85fsc(v=vs.110).aspx

Next I added <ValidateArchitecture>true</ValidateArchitecture> to my model project's .modelproj file. This instructs MSBuild to validate the architecture on each build. Since this is configured at the project level, it will validate the architecture against all of the layer diagrams included in the project.

For a simpler way to add the configuration setting, here is an MSDN walkthrough – http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto

1. In Solution Explorer, right-click the modeling project that contains the layer diagram or diagrams, and then click Properties.
2. In the Properties window, set the modeling project’s Validate Architecture property to True.
This includes the modeling project in the validation process.
3. In Solution Explorer, click the layer diagram (.layerdiagram) file that you want to use for validation.
4. In the Properties window, make sure that the diagram’s Build Action property is set to Validate.
This includes the layer diagram in the validation process.

Adding this configuration to the project file only validates my local build. As part of my Quality Pipeline I also want to validate on Team Build (my continuous build server). There was some guidance out there on the web and blogosphere, but for some reason my options didn't match what they were doing. You can try the solution on MSDN (http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto). Like I said, this didn't work for me. I had to right-click the build definition in Build Explorer and click Edit Build Definition. On the Process tab, under Advanced, I added /p:ValidateArchitecture=true to MSBuild Arguments.

Now my code is guarded against many of the issues that result from implementations that violate the designed architecture.