Haunted by the ASP.NET White Screen of Death

You ever have a bug that you just can’t put your finger on? Well, this incomplete post is a tale of such a bug. I never had a chance to finish the post or the bug hunt. The post stares at me every time I log into my blog, haunting me. So, I had to post it to release it from its torment.

Early one evening I get the “White Screen of Death” in an ASP.NET application. Basically, it is an empty web page, just an empty HTML tag when it should be a full-blown, data-driven web page. You scared yet? This isn’t the first time I have seen this in this application. I am pretty sure it is related to other errors, as I remember seeing errors in the application log whenever I see the screen in certain scenarios. The problem is, on this evening it happened in a scenario without logged errors, but that doesn’t mean there are no errors…right?

Next, to see if the error may have been captured elsewhere, I set off to check the server logs. I check the server event logs for issues. Nothing jumps out at me. Then I want to check the IIS logs for the page request, so I need to turn on failed request logging on the server. I haven’t done this in a while, so I Binged it and got a good hit on this post: http://www.iis.net/configreference/system.applicationhost/sites/sitedefaults/tracefailedrequestslogging. Now that I have it installed and configured, I just don’t know how to inspect the resulting traces. Another Bing and I found the answer at http://www.trainsignal.com/blog/iis-7-troubleshooting. OK, tracing is working and I can view the trace files, but now I can’t reproduce the error. Oh the horror… figures 😦

Finally, I find a failing scenario and I get nothing of any value in the trace. So I run the scenario again and check the server event logs again, and, you guessed it, nothing jumps out at me. So far I don’t see anything obvious, though I could just be burnt out and missing it (it happens).

I do a little more Binging and the pickings are slim. I get hits on white page issues involving SSLAlwaysNegoClientCert. Seems some people were having an issue with a 413 – Request Entity Too Large error causing the white page (https://communities.bmc.com/docs/DOC-6259), but there is no way this could be my issue because I’m not uploading anything…right? There is no way ViewState is incredibly large…nah…I’ll check anyway. Better safe than sorry, and maybe something will finally jump out at me.

So, I need to:

  • Check ViewState, mainly the size (a quick way to log it is sketched below)
  • Check network traffic, to view what is being sent and received
  • See if I can repro the scenario locally so I can step through it in a debugger
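Since the ViewState check is the easy one, here is a minimal sketch of what I have in mind, assuming a WebForms base page; the class name and the Debug.WriteLine target are just illustrative. On a postback it logs the length of the posted __VIEWSTATE field, so an oversized ViewState (and a potential 413) would be easy to spot.

using System;
using System.Diagnostics;
using System.Web.UI;

// Hypothetical diagnostic base page; pages under suspicion would inherit from this.
public class DiagnosticPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        if (this.IsPostBack)
        {
            // The ViewState posted back from the previous render of the page.
            string postedViewState = this.Request.Form["__VIEWSTATE"];
            int length = postedViewState == null ? 0 : postedViewState.Length;

            // Log the size so a huge ViewState is easy to spot in the debug output.
            Debug.WriteLine("Posted ViewState length: " + length);
        }
    }
}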

And this ends our post. Sorry for the abrupt cliffhanger with no solution. I never finished the exploration on this as it was a very low priority edge case bug. It does provide some links on IIS tracing and a little insight into my thought process at the time on trying to discover the source of the bug. One thing I have always admired about many of the smart developers I work with is their thought processes and the tool sets they use when investigating issues. I have always felt that I came up short in my ability to quickly discover the root cause of issues, so I have great respect for the software Sherlock Holmeses of the world.

Well, hindsight is 20/20, and the biggest mistake that I see is that I didn’t automate the scenario. If I had captured the scenario in a test, I could open it right now and continue where I left off, but now I have no idea where to even start to find the scenario that triggered this issue. So, the issue may still be lurking deep in the bowels of the application. This is the real horror in this post. Oh well, lesson learned: automate my bug hunt scenarios.

Actually, I wrote this some time ago and remember spending about an hour or so running through this drill in vain. So, in an effort not to waste something that may be of value later, I decided to just post it to stop it from haunting my post list.

Agile Browser Based Testing with SpecFlow

This may be a little misleading as you may think that I am going to give some sort of Scrumish methodology for browser based testing with SpecFlow. Actually, this is more about how I implemented a feature to make browser based testing more realistic in a CI build.

Browser based testing is slow, real slow. So, if you want to integrate this type of testing into your CI build process you need a way to make the tests run faster or it may add considerable time to the feedback loop given to developers. My current solution is to only run tests for the current sprint. To do this I use a mixture of SpecFlow and my own home grown test framework, TestPipe, to identify the tests I should run and ignore.

The solution I’m using at the moment centers on SpecFlow Tags. Actually, I have blogged about this before in my SpecFlow Tagging post. In this post I want to show a little more code that demonstrates how I accomplish it.

Common Scenario Setup

The first step is to use a common Scenario setup method. I add the setup as a static method to a class accessible by all Step classes.

public class CommonStep
{
    public static void SetupScenario()
    {
        try
        {
            Runner.SetupScenario();
        }
        catch (CharlesBryant.TestPipe.Exceptions.IgnoreException ex)
        {
            // Tell the test framework (NUnit here) to ignore the test and report why.
            Assert.Ignore(ex.Message);
        }
    }
}
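For completeness, here is a minimal sketch of how I would wire this up so it runs for every scenario; the hook class below is just for illustration, while [Binding] and [BeforeScenario] are standard SpecFlow.

using TechTalk.SpecFlow;

[Binding]
public class ScenarioHooks
{
    // Runs before every scenario, so each test passes through the common setup
    // and is ignored consistently when its tags say it should not run.
    [BeforeScenario]
    public static void BeforeScenario()
    {
        CommonStep.SetupScenario();
    }
}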

TestPipe Runner Scenario Setup

The method calls the TestPipe Runner method SetupScenario, which handles the Tag processing. If SetupScenario determines that the scenario should be ignored, it throws the exception that is caught. We handle the exception by asserting that the test is ignored with the test framework’s ignore method (in this case NUnit). We also pass the ignore method the exception message, as there are a few reasons why a test may be ignored and we want the reason included in our reporting.

SetupScenario includes this bit of code:

if (IgnoreScenario(tags))
{
    throw new IgnoreException();
}
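For reference, the tags it evaluates come from SpecFlow’s scenario metadata. Exactly how the runner gets at them is TestPipe internals, so the wrapper below is an assumption, but ScenarioContext.Current.ScenarioInfo.Tags is the standard SpecFlow way to read the current scenario’s tags.

using TechTalk.SpecFlow;

// Assumed sketch of how the runner could collect the current scenario's tags
// before calling IgnoreScenario; only the SpecFlow API call is standard.
public static string[] GetScenarioTags()
{
    return ScenarioContext.Current.ScenarioInfo.Tags;
}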

Configuring Tests to Run

This is similar to what I blogged about in the SpecFlow Tagging post, but I added a custom exception. Below are the interesting methods in the call stack walked by the SetupScenario method.

public static bool IgnoreScenario(string[] tags)
{
    if (tags == null)
    {
        return false;
    }

    string runTags = GetAppConfigValue("test.scenarios");
    runTags = runTags.Trim().ToLower();

    return Ignore(tags, runTags);
}

This method gets the tags that we want to run from configuration. For each sprint there is a code branch, and the app.config for the tests in that branch contains the tags for the tests we want to run for the sprint on the CI build. There is also a regression branch, run weekly, that runs all the tests. All of the feature files are kept together, so being able to tag specific scenarios in a feature file gives us the ability to run specific tests for a sprint while keeping all of the features together.
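GetAppConfigValue is a TestPipe helper, so its exact shape is an assumption on my part, but it is essentially a thin wrapper over ConfigurationManager.AppSettings; something like the sketch below, where the "test.scenarios" key holds the sprint’s tag.

using System.Configuration;

// Assumed shape of the helper: read an appSettings value, defaulting to empty
// so a missing key simply means "no tag filter configured".
public static string GetAppConfigValue(string key)
{
    string value = ConfigurationManager.AppSettings[key];
    return value ?? string.Empty;
}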

Test Selection

Here is the selection logic.

public static bool Ignore(string[] tags, string runTags)
{
    if (string.IsNullOrWhiteSpace(runTags))
    {
        return false;
    }

    // If runTags has a value, the scenario's tags must match or the scenario is ignored.
    if (tags == null)
    {
        throw new IgnoreException("Ignored tags is null.");
    }

    if (tags.Contains("ignore", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Ignored");
    }

    if (tags.Contains("manual", StringComparer.InvariantCultureIgnoreCase))
    {
        throw new IgnoreException("Manual");
    }

    if (runTags == "all" || runTags == "all,all")
    {
        return false;
    }

    if (tags.Contains(runTags, StringComparer.InvariantCultureIgnoreCase))
    {
        return false;
    }

    return true;
}

This is the meat of the solution, where most of the logic lives. As you can see, the exceptions contain messages for when a test is explicitly ignored with the Ignore or Manual tag. The Manual tag identifies features that are defined but can’t be automated. This way we still have a formal definition that can guide our manual testing.

The variable runTags holds the value retrieved from configuration. If the config defines "all" or "all,all", we run all the tests that aren’t explicitly ignored. The "all,all" value is a special case used when ignoring tests at the Feature level, but this post is about Scenario level ignoring.

The final test is to compare the tags to the runTags config. If the tags include the runTags value, we run the test. Any tests that don’t match are ignored. For scenarios this only works for one runTag. Maybe we name the tag after the sprint, a sprint ticket, or whatever; it just has to be unique for the sprint. I like the idea of tagging with a ticket number, as it gives traceability to tickets in the project management system.

Improvements and Changes

I have contemplated using a feature file organization similar to SpecLog’s. They advocate a separate folder to hold feature files for the current sprint. Then I believe, but am not sure, that they tag the current sprint feature files so they can be identified and run in isolation. The problem with this is that the current sprint features have to be merged with the existing features after the sprint is complete.

Another question I have asked myself is whether I want to allow some kind of test selection through command line parameters. I am not really sure yet. I will put that thought on hold for now, or until a need for command line configuration makes itself evident.

Lastly, another improvement would be to allow specifying multiple runTags. We would then have to iterate the run tags and compare, or come up with a performant way of doing it. Performance would be a concern, as this would have to run for every test, and for a large project there could be thousands of tests, each already carrying the inherent performance cost of having to run in a browser. A rough sketch of what that comparison could look like is below.
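This is a minimal sketch of what multiple runTags might look like, assuming the config value becomes a comma-separated list; the HashSet keeps the per-test comparison cheap. It is just an idea, not how TestPipe currently works, and it only shows the comparison, not the ignore/manual handling.

using System;
using System.Collections.Generic;

public static bool IgnoreWithMultipleRunTags(string[] tags, string runTags)
{
    if (string.IsNullOrWhiteSpace(runTags) || tags == null)
    {
        return false;
    }

    // Split the configured value (e.g. "sprint42,TICKET-1001") into a case-insensitive set.
    var runTagSet = new HashSet<string>(
        runTags.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries),
        StringComparer.InvariantCultureIgnoreCase);

    if (runTagSet.Contains("all"))
    {
        return false;
    }

    // Run the scenario if any of its tags match any configured run tag; otherwise ignore it.
    return !runTagSet.Overlaps(tags);
}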

Conclusion

Well that’s it. Sprint testing can include browser based testing and still run significantly faster than running every test in a test suite.

C# MEF BrowserFactory for Browser Based Testing

In TestPipe, my browser based test framework, I use MEF to help abstract the concept of a browser. The reason I do this is so I am not tied to a specific browser driver framework (e.g. WebDriver, WatiN, System.Net.WebClient). This allows me to change drivers without having to touch my test code. Here’s how I do it.

Browser Interface

First I created an interface that represents a browser. I used a mixture of interfaces from WebDriver and WatiN.

namespace CharlesBryant.TestPipe.Interfaces
{
    using System;
    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using CharlesBryant.TestPipe.Browser;
    using CharlesBryant.TestPipe.Enums;

    public interface IBrowser
    {
        IBrowserSearchContext BrowserSearchContext { get; }
        BrowserTypeEnum BrowserType { get; }
        string CurrentWindowHandle { get; }
        string PageSource { get; }
        string Title { get; }
        string Url { get; }
        ReadOnlyCollection<string> WindowHandles { get; }
        IElement ActiveElement();
        void Close();
        void DeleteAllCookies();
        void DeleteCookieNamed(string name);
        Dictionary<string, string> GetAllCookies();
        bool HasUrl(string pageUrl);
        void LoadBrowser(BrowserTypeEnum browser, BrowserConfiguration configuration = null);
        void Open(string url, uint timeoutInSeconds = 0);
        void Quit();
        void Refresh();
        void SendBrowserKeys(string keys);
        void TakeScreenshot(string screenshotPath);
        void AddCookie(string key, string value, string path = "/", string domain = null, DateTime? expiry = null);
    }
}

Pretty basic stuff, although BrowserSearchContext took some thought to get working. Basically, that abstraction provides the facility to search for elements. A lot of the concepts here are borrowed from WebDriver and WatiN and are just a way to wrap their functionality and use it without being directly dependent on them. To use this you have to change your tests from directly using a browser driver to using this abstraction. At the start of your tests you use the BrowserFactory to get the specific implementation of this interface that you want to test with.

Browser Factory

Then I created a BrowserFactory that uses MEF to load browsers that implement the browser interface. When I need a browser, I call Create on the BrowserFactory to get the browser driver I want to test with. To make this happen I have to actually create wrappers around the browser drivers I want available. One caveat about MEF is that it needs to be able to find your extensions, so you have to tell it where to find them. To make the browsers available to the factory I add the MEF class attribute [Export(typeof(IBrowser))] to my browser implementations. Then I add a post build event to the browser implementation projects to copy their DLL to a central folder:

copy $(TargetPath) $(SolutionDir)\Plugins\Browsers\$(TargetFileName)

Then I added an appSettings key, with a value that points to this directory, to the config of my clients that use the BrowserFactory. Now I can reference this config value to tell MEF where to load browsers from. Below is roughly how I use the factory with MEF.

namespace CharlesBryant.TestPipe.Browser
{
    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Configuration;
    using System.IO;
    using System.Reflection;
    using CharlesBryant.TestPipe.Enums;
    using CharlesBryant.TestPipe.Interfaces;

    public class BrowserFactory
    {
        [Import(typeof(IBrowser))]
        private IBrowser browser;

        public static IBrowser Create(BrowserTypeEnum browserType)
        {
            BrowserFactory factory = new BrowserFactory();
            return factory.Compose(browserType);
        }

        private IBrowser Compose(BrowserTypeEnum browserType)
        {
            this.browser = null;

            try
            {
                // Load any IBrowser exports found in the configured plugin directory.
                AggregateCatalog aggregateCatalogue = new AggregateCatalog();
                aggregateCatalogue.Catalogs.Add(new DirectoryCatalog(ConfigurationManager.AppSettings["browser.plugins"]));

                CompositionContainer container = new CompositionContainer(aggregateCatalogue);
                container.ComposeParts(this);
            }
            catch (FileNotFoundException)
            {
                //Log
            }
            catch (CompositionException)
            {
                //Log
            }

            if (this.browser == null)
            {
                throw new InvalidOperationException("No IBrowser implementation could be composed from the plugin directory.");
            }

            this.browser.LoadBrowser(browserType);
            return this.browser;
        }
    }
}

namespace CharlesBryant.TestPipe.Enums
{
    public enum BrowserTypeEnum
    {
        None,
        IE,
        Chrome,
        FireFox,
        Safari,
        Headless,
        Remote,
        Other
    }
}
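For completeness, a minimal sketch of how a test might consume the factory is below (the class name and URL are just for illustration); the only coupling to a real driver is whichever [Export(typeof(IBrowser))] plugin DLL MEF composes in from the configured folder.

using CharlesBryant.TestPipe.Browser;
using CharlesBryant.TestPipe.Enums;
using CharlesBryant.TestPipe.Interfaces;

public class ExampleBrowserUsage
{
    public void Run()
    {
        // The test code only knows about IBrowser; swapping drivers is a config/plugin change.
        IBrowser browser = BrowserFactory.Create(BrowserTypeEnum.Chrome);

        browser.Open("http://example.com", 30);
        // ... find elements via browser.BrowserSearchContext and assert ...
        browser.Quit();
    }
}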

Conclusion

Well, that’s the gist of it. I have untethered my tests from browser driver frameworks. This is not fully tested across a broad range of scenarios, so there may be issues, but so far it’s doing OK for me.

The examples above are not production code, use at your own risk.

Caching in on .Net

Just a quick post about a feature I wasn’t aware of. If you need to cache objects in .Net, before you run off and write a custom cache abstraction to divorce your code from System.Web or to scale out your cache, check out the abstract class System.Runtime.Caching.ObjectCache. It’s based on a mature cache model and is part of the .Net Framework. I haven’t researched it, but there are probably implementations for your favorite distributed cache platforms already, and if there isn’t one, I bet it’s not too difficult to roll your own with ObjectCache as the base.

Also, if you need a simple in-memory cache without the System.Web headache, try System.Runtime.Caching.MemoryCache. MemoryCache.Default gives you an in-memory cache similar to the cache in ASP.NET, but for use in any .Net project without the System.Web dependency. It also provides ways to override and modify cache behavior.
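A quick sketch of the kind of usage I mean, assuming a reference to System.Runtime.Caching; the cache key and expiration are arbitrary choices for the example.

using System;
using System.Runtime.Caching;

public static class CacheExample
{
    public static string GetReport()
    {
        // MemoryCache.Default is a shared, in-memory ObjectCache for the whole AppDomain.
        ObjectCache cache = MemoryCache.Default;

        string report = cache["daily-report"] as string;

        if (report == null)
        {
            report = BuildExpensiveReport();

            // Keep the value around for five minutes, then let it expire.
            var policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5) };
            cache.Set("daily-report", report, policy);
        }

        return report;
    }

    private static string BuildExpensiveReport()
    {
        return "report data";
    }
}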

So, you get a default cache with a mature implementation of caching and the ability to add your own customizations; what more could you ask for? How about a thread safe dictionary? I stumbled upon MemoryCache as I was researching new ways to make a thread safe dictionary. Well, I guess I will go back to learning about System.Collections.Concurrent.ConcurrentDictionary<TKey, TValue>. Wish me luck, and if you have any advice, please let me know.
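For reference, here is a minimal sketch of the sort of ConcurrentDictionary usage I am looking at; AddOrUpdate is the framework API, while the class and key are placeholders of my own.

using System.Collections.Concurrent;

public class PageHitCounter
{
    private readonly ConcurrentDictionary<string, int> counts =
        new ConcurrentDictionary<string, int>();

    public int CountHit(string page)
    {
        // Adds the key with a count of 1, or atomically increments the existing count.
        return this.counts.AddOrUpdate(page, 1, (key, current) => current + 1);
    }
}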

My Best Practices for Functional Testing

I am not a big fan of best practices because they have proliferated to the point that it’s hard to trust that some arbitrary blog espousing best practices has really put in the time and has the experience behind the practices to qualify them as best. So, I qualify this post with “MY”. These are practices that I am using right now that have proven to work for me across multiple projects. I am by no means a Functional Testing guru. I have been a developer for many years, but just started functional testing full time last month. Much of this has roots in other “best practices” so there is really nothing new, just developer common sense. This is just a way for me to start to catalog my practices for future reference and to share with the community.

  1. Coherent – scenarios should assert a single concept. Unlike unit testing, I believe it is OK to make multiple assertions in a functional test, because rerunning the same 10 second process to make discrete assertions across the state of a page is a waste of time. The multiple asserts should each include some type of message so you know which one failed (see the sketch after this list). Even though I advocate making multiple assertions, the assertions should be related. You should not assert that your button click worked because you landed on the correct page, then assert that the landing page has the correct content and that clicking a link on the landing page sent you to the home page. That is asserting multiple concepts, and this type of multiple assertion is a no-no. In the example, asserting the button click worked, asserting the page has the correct content, and asserting that the link worked are all different concepts that express distinct concerns that should be asserted in isolation. This test should only have asserted that the button click worked and sent you to the correct page; the other asserts should have been in other tests.
  2. Light – keep your scenario definitions light on details and heavy on business value. Do try to define a script that a QA tester can follow in their testing, but express only the details necessary to convey the concerns that address the business value of the feature. If you are defining a business process for making a payment on a website, you don’t have to state every step taken to get to the payment page or every mouse click and keystroke taken to enter, submit and verify the payment. Pull out the steps that can be implied. Have your scenarios read more like a story for business people and not a script for QA and developers. Even if you don’t have business people reading the features and scenarios, you will find that they become a lot easier to maintain because they aren’t tied to details that can change wildly in new feature development.
  3. Independent – scenarios should not rely on the results of any other scenario. Likewise, you should ensure your scenarios are not influenced by the results of other scenarios. I learned the term “Flaky Test” by watching a couple of videos by the Google test team. Flaky tests are tests that sometimes pass and sometimes fail even though the test input and steps don’t change. Many times this is because of side effects produced by previously run scenarios. A developer way of expressing this would be: a test that is run with the same input should produce the same result on each test run. The test should be idempotent.
  4. Focused – in your test action or “When” step, in Gherkin, you should only trigger one event in your domain. If you are triggering multiple actions from across multiple contexts, it becomes difficult to know what is being tested. Many times when I see tests with multiple “When” steps, they are actually mixing additional “Given” or setup steps with the actual action of the test. You could say that this is just semantics and I’m being a Gherkin snob protecting the sanctity of the Gherkin step types, but to me it just keeps scenarios simple when you know exactly what is being tested. If you feel the multiple “When” steps are valid, combine them into one step and express the multiple actions in the code behind the scenario. Keep your scenario definition focused on testing one thing.
  5. Fast – be mindful of the performance of your scenarios. Even though you may be writing slow functional tests you should not add to the slowness by writing slow code to implement your scenarios. I would take this further and say that you should write test code with the same care and engineering discipline that is used to write production code.
  6. Simple – try not to expose complexities in your test steps. Wrap your complexities. This is similar to keeping the test focused, but it extends to the entire scenario and test code, not just the action step in your scenario definition. This is both a feature analysis and a test development principle. Think about the Page Object Model: it hides the complexities of page interactions and improves the maintainability of tests while keeping your test code and scenario definitions simple. Don’t include a lot of complex details in your scenarios or they will be bound to those details, and when the details change you will have to change the scenario, the step code and probably more.
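To illustrate the point about assertion messages in item 1, here is a minimal sketch; it assumes NUnit, the page object and its properties are invented, and the messages are what tell you which of the related assertions failed when the browser run is long.

using NUnit.Framework;

public class PaymentConfirmationTests
{
    [Test]
    public void SubmittingPaymentLandsOnConfirmationPage()
    {
        // Hypothetical page object representing the page reached after the click.
        var page = new PaymentConfirmationPage();

        // Related assertions about one concept (the landing page), each with a message.
        Assert.IsTrue(page.IsLoaded, "Confirmation page did not load after submitting payment.");
        Assert.AreEqual("Payment Confirmation", page.Title, "Confirmation page title was wrong.");
    }
}

// Stub so the sketch compiles; a real test would drive this from the browser abstraction.
public class PaymentConfirmationPage
{
    public bool IsLoaded { get { return true; } }
    public string Title { get { return "Payment Confirmation"; } }
}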

Yes, that spells CLIFFS. I wanted to join the acronym bandwagon. This would be a better post if it had more examples or explanations, but this is a lazy post to keep my blogging going. If you disagree or want clarification, I would be glad to do a follow up on my thoughts on these practices. I am anxious to see how these stand up to a review at the end of this year.

Local WordPress Development on Windows

I decided to set up a local development server to make my WordPress theme development easier. Let’s just say it was a pain in the butt. I will record some of what I had trouble with; you can view the mountain of tuts and help sites that can guide you through the details. This is the site I followed: http://sixrevisions.com/tutorials/web-development-tutorials/using-xampp-for-local-wordpress-theme-development/.

I wanted to be able to work on multiple sites, so I had to do some configuration. The first thing was to set up virtual hosts for the sites I wanted to work with. I am using XAMPP for my server management, and in this tool I stopped the Apache server. To set up multiple sites in Apache I am using the virtual host configuration file, located at [install directory]\apache\conf\extra\httpd-vhosts.conf. It took me a while to get this config right, and it may not be optimal for your box, but here it is:

<VirtualHost *:80>
    ServerAdmin postmaster@yourdomain.com
    # This is the path to your website files; they can be located anywhere on your drive
    DocumentRoot "C:/inetpub/wwwroot/yourdomain.com"
    ServerName yourdomain.com
    ServerAlias www.yourdomain.com
    ErrorLog "logs/yourdomain.com-error.log"
    CustomLog "logs/yourdomain.com-access.log" combined
    # I had to add this to get around 403 errors
    <Directory "C:/inetpub/wwwroot/yourdomain.com/">
        Options Indexes FollowSymLinks Includes ExecCGI
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

As you probably figured out yourdomain.com can be any domain you want: dev.yourdomain.com, google.com, mysite.net…

Next, I had to tell Windows to map the domain name to the local IP. This is done in the hosts file, which you can probably find at C:\Windows\System32\drivers\etc\hosts. The file does not have an extension and you should edit it in a simple text editor like Notepad. To configure the site I just added to Apache, I added:

127.0.0.1 yourdomain.com

After this was done for each site I wanted to configure I could access each one of them and develop locally without having to mess with the live production server.

Gotcha

If you have a problem starting Apache with a message stating that port 80 is unavailable, you may need to stop IIS and the Web Deployment Agent service. In my situation, these were the services that had port 80 tied up.

Good Luck!!!

A Twist on Test Structure

As you may not know, I love testing. Unit tests, integration tests, performance tests, and acceptance tests all have a prominent place in my development methodology. So, I love when I learn new tips and tricks that help simplify testing. Well Phil Haack did a post on “Structuring Unit Tests” that was quite ingenious even though he got it from a guy (Drew Miller), who got it from two other guys (Brad Wilson and James Newkirk).

The gist is to write a test class that contains the tests for a specific class under test, and then nested classes within that test class for each method of the class under test. Brian Rigsby took it further and showed how to reuse initialization code written in the parent class across all of the nested classes, since they also inherit from it. Below is the resulting structure from Brian’s blog.

[TestClass]
 public class TitleizerTests
 {
 protected Titleizer target;

 [TestInitialize]
 public void Init()
 {
  target = new Titleizer();
 }

 [TestClass]
 public class TheTitleizerMethod : TitleizerTests
 {
  [TestMethod]
  public void ReturnsDefaultTitleForNullName()
  {
      //act
      string result = target.Titleize(null);

      //assert
      Assert.AreEqual(result, "Your name is now Phil the Foolish");
  }

  [TestMethod]
  public void AppendsTitleToName()
  {
      //act
      string result = target.Titleize("Brian");

      //assert
      Assert.AreEqual(result, "Brian the awesome hearted");
  }
  }
 }

[Screenshot: test results grouped by the nested test classes]

I like how the results are better structured as a result of this test code structure, without having to repeat initialization code. The problem with this approach is that it violates Code Analysis rule CA1034: Nested types should not be visible. I know this is a test class and not production code, so maybe I am pointing out something that is not worth pointing out. The thing is, I have been bitten a few times by thinking it is OK to ignore Code Analysis rules, so I have to do my due diligence to ensure this won’t cause issues down the road. Also, IMHO, test code should be as good as production code.

So far it seems as if the main reason for the rule is to protect external callers of the publicly exposed nested types. Maintainability is the common theme I can find in explanations: if you move the nested type outside of the containing type, it is a breaking change for external callers. For now, I will ignore the rule as I try this test structure out, but I am afraid…very afraid.

Setup Zurb Foundation for .Net Development

Zurb Foundation

My wife wanted a new website built for a project she is working on. Being the good developer husband I am, I decided to implement the site with a responsive design. I could research and build the CSS and JavaScript from scratch, but I was sure there was a project already developed to help get me started. Well, I landed on a project named Foundation. Here is what they have to say about themselves:

Foundation is the most advanced responsive front-end framework in the world. You can quickly prototype and build sites or apps that work on any kind of device with Foundation, which includes layout constructs (like a fully responsive grid), elements and best practices.

Is it the most advanced? I have no idea, but it seems simple, has a decent community, and is in use on some major sites. Why not Bootstrap, you ask? Well, some respected front end designers said Foundation is cool. I’m a developer, so I respect what the design community has to say about design. This is not to say that designers don’t like Bootstrap; from what I can tell they love it too. I just wanted to learn some of the dark arts, and Foundation is closer to the metal while Bootstrap hands everything to you on a platter. I am not qualified to compare them, but you can read about some differences between the two that Felippe Nardi posted on his blog, https://medium.com/frontend-and-beyond/8b3812c7007c. Actually, his post pushed me to Foundation. Even though he said I will have to get my hands dirty to use it, I think I will enjoy the control and the absence of unnecessary or accidental complexity.

Install

First, in Visual Studio, I create an empty MVC project. I recently updated my VS 2012 with Web Tools 2013.1 for Visual Studio 2012, so I am using ASP.NET MVC 5.

Side note, this VS 2012 update includes some nice new features including the concept of round tripping so you can work with your MVC 5 projects in both VS 2012 and VS 2013 without having to change anything (sweet).

OK, I have gotten into the habit of checking NuGet before I attempt to bring new toys into my projects because it makes things so much easier. There is a NuGet package to set up Foundation, Foundation5.MVC.Sass, so pulling in the files I needed was a breeze. Setup, on the other hand, was a bear. For some reason I could not get the files to install correctly. Oh well, they downloaded to the solution package folder, so I just had to manually copy them.

Manual Setup

First I created a Content folder in the root of my project. Then I opened the folder for the Foundation.Core.Sass package, copied the files from content/sass, and dropped them in the Content folder in my project. This framework uses Sass. To compile the Sass files I installed the Mindscape Web Workbench, which lets me compile the CSS by just saving the .scss file. So, I open and save the site.scss file to get site.css created (you will see it directly under the .scss file).

Next, I set up the JavaScript. I open the content folder in the Foundation.Core.Sass package and copy the Scripts folder to the root of my project. There are quite a few JavaScript files in there, so I have to get them combined and minified to improve performance.

Bundling

To do this, I used NuGet to install Microsoft.AspNet.Web.Optimization, which provides JS and CSS bundling features for ASP.NET. Next, I create a BundleConfig.cs in the App_Start folder and add my bundles. Something like this:

using System.Web;
using System.Web.Optimization;

namespace ZurbFoundationDotNetBase
{
    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/site.css"));

            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                "~/Scripts/jquery-{version}.js"));

            bundles.Add(new ScriptBundle("~/bundles/modernizr").Include(
                "~/Scripts/modernizr-*"));

            bundles.Add(new ScriptBundle("~/bundles/foundation").Include(
                "~/Scripts/foundation/foundation.js",
                "~/Scripts/foundation/foundation.*"));
        }
    }
}

Next, I have to bootstrap this file by telling the application to load it. I add this line under the RouteConfig line in Application_Start in the Global.asax.cs file:

BundleConfig.RegisterBundles(System.Web.Optimization.BundleTable.Bundles);
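For context, assuming a fairly standard Global.asax.cs, the registration ends up looking something like this (the RouteConfig line is the one the project template already gives you):

using System.Web.Routing;

namespace ZurbFoundationDotNetBase
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            RouteConfig.RegisterRoutes(RouteTable.Routes);

            // Registers the style and script bundles defined in App_Start/BundleConfig.cs.
            BundleConfig.RegisterBundles(System.Web.Optimization.BundleTable.Bundles);
        }
    }
}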

Views and Controllers

The MVC 5 update for VS 2012 only allows us to create empty MVC 5 projects, so I have to manually create the controllers and views. The NuGet install for Foundation also missed my MVC files, so I have to move them from the package to the project. I open the content folder in the Foundation5.MVC.Sass package and copy the Views folder to the root of the project. The package doesn’t include a _ViewStart.cshtml, so I create one and point the layout to the Shared/_Foundation.cshtml file.

@{
 Layout = "~/Views/Shared/_Foundation.cshtml";
}

Next, I rename Foundation_Index.cshtml to Index.cshtml. The last change in the Views folder is to update the web.config so that the pages section looks something like this:

<pages pageBaseType="System.Web.Mvc.WebViewPage">
 <namespaces>
 <add namespace="System.Web.Helpers" />
 <add namespace="System.Web.Mvc" />
 <add namespace="System.Web.Mvc.Ajax" />
 <add namespace="System.Web.Mvc.Html" />
 <add namespace="System.Web.Optimization" />
 <add namespace="System.Web.Routing" />
 <add namespace="System.Web.WebPages" />
 <add namespace="ZurbFoundationDotNetBase" />
 </namespaces>
 </pages>

Since there are no controllers, I have to add one to manage the Home/Index view. In the Controllers folder I create an empty MVC 5 controller named HomeController.cs; it only needs an Index action, sketched below.
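This is a minimal sketch of that controller, assuming the project’s default namespace; the empty controller template gives you essentially the same thing.

using System.Web.Mvc;

namespace ZurbFoundationDotNetBase.Controllers
{
    public class HomeController : Controller
    {
        // Serves Views/Home/Index.cshtml, which picks up the _Foundation.cshtml layout via _ViewStart.
        public ActionResult Index()
        {
            return View();
        }
    }
}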

Conclusion

Well, that’s it. I was able to build and run the project and see the responsive index page. Now I have a base project for responsive web design using a less opinionated framework than Twitter Bootstrap. Although I will still advocate the use of Bootstrap, I believe Foundation will fit my personality and style of development a little better. If any of this interests you, I posted the solution on GitHub.

Architecture Validation in Visual Studio

As part of my Quality Pipeline I want to validate my code against my architectural design. This means I don’t want invalid code integrations, like client code calling directly into data access code. With Visual Studio 2012 this is no problem. First I had to create a Modeling Project. Then I captured my architecture as a layer diagram. I won’t go over the details of how to do this, but you can find resources here.

Next I added

<ValidateArchitecture>true</ValidateArchitecture>

to my model project’s .modelproj file. This instructs MSBuild to validate the architecture on each build. Since this is configured at the project level, it validates the architecture against all of the layer diagrams included in the project.

For a simpler way to add the configuration setting, here is an MSDN walkthrough – http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto:

  1. In Solution Explorer, right-click the modeling project that contains the layer diagram or diagrams, and then click Properties.
  2. In the Properties window, set the modeling project’s Validate Architecture property to True.
    This includes the modeling project in the validation process.
  3. In Solution Explorer, click the layer diagram (.layerdiagram) file that you want to use for validation.
  4. In the Properties window, make sure that the diagram’s Build Action property is set to Validate.
    This includes the layer diagram in the validation process.

Adding this configuration to the project file only validates my local build. As part of my Quality Pipeline I also want to validate on Team Build (my continuous build server). There was some guidance out there in the web and blogosphere, but for some reason my options did not match what they were describing. You can try the solution on MSDN (http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto). Like I said, this didn’t work for me. I had to right click the build definition in Build Explorer and click Edit Build Definition. On the Process tab, under Advanced, I added

/p:ValidateArchitecture=true

to MSBuild Arguments.

Now my code is guarded against many of the issues that result from implementations that violate the designed architecture.

A Case for Small, Distinct Method Concerns

For me, one of the biggest reasons for breaking a method up into distinct tasks or concerns is rooted in the expressiveness of exception messages. Below is an exception message from an actual production website. I am not going to show the code, but I had to answer the question, “Where did the exception below get thrown in the Page_Load method?” This particular method is a monster that is over 300 lines of code, with multiple points where multiple reference objects could be the reason for the null. If the method were broken down into distinct concerns, I would have a fighting chance of finding the source of the error in less than an hour. Hell, with distinct methods I could probably find the null as soon as I expand the method.

Target: Void Page_Load(System.Object, System.EventArgs)
 Type: System.NullReferenceException
 Exception Message:
 Object reference not set to an instance of an object.
Exception StackTrace: at Page_Load(Object sender, EventArgs e)
 at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
 at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)
 at System.Web.UI.Control.OnLoad(EventArgs e)
 at System.Web.UI.Control.LoadRecursive()
 at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
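To make the point concrete, here is a hedged sketch (the page and method names are invented) of what splitting the monster up buys you: once Page_Load only delegates, the stack trace names the small method that blew up instead of a 300 line Page_Load.

using System;

public partial class OrderPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Each concern gets its own method, so a NullReferenceException's stack trace
        // points at LoadCustomer, LoadOrderHistory, or BindShippingOptions directly.
        this.LoadCustomer();
        this.LoadOrderHistory();
        this.BindShippingOptions();
    }

    private void LoadCustomer()
    {
        // ... look up and bind customer details ...
    }

    private void LoadOrderHistory()
    {
        // ... load and bind order history ...
    }

    private void BindShippingOptions()
    {
        // ... populate shipping controls ...
    }
}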

Do your code a favor, and do the people who will be maintaining your code a favor: break up your god classes and methods into bite sized chunks of related functionality.