Bisecting Our Code Quality Pipeline

I want to implement gated check-ins, but it will be some time before I can restructure our process and tooling to accomplish it. What I really want is to keep the source tree green and, when it is red, provide feedback to quickly get it green again. I want to run tests on every commit and give developers feedback on their failing commits before they pollute the source tree. Unfortunately, running the tests as we have them today would take too long to do on every commit. I came across a quick blog post by Ayende Rahien on Bisecting RavenDB, where his team used git bisect to find the culprit that failed a test. He gave no information on how it actually works, just a tease that they are doing it. I left a comment to see if he would share some of the secret sauce behind the solution, but until I get a response I wanted to ponder it for a moment.

Git Bisect

To speed up testing and also allow test failure culprit identification with git bisect, we would need a custom test runner that can identify which tests to run and run them. We don't run tests on every commit; we run tests nightly against all of the commits that occurred during the day. When the tests fail it can be difficult to identify the culprit(s). This is where Ayende steps in with his team's idea to use bisect to help identify the culprit. Bisect works by binary searching the commits between the one we mark as the last known good commit and the last commit included in the failing nightly test. At each step bisect checks out a commit and waits for us to test it and mark it good or bad. In our case we could run the tests for just that commit. If they pass, tell bisect it's good and move on. If they fail, save the commit and failing test(s) as a culprit, tell bisect it's bad, and move on. This will result in a list of culprit commits and their failing tests that we can use for reporting and bashing over the head of the culprit owners (just kidding…not).
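
Driving that by hand looks roughly like this; abc1234 stands in for the last known good commit, and CommitTestRunner.exe is a hypothetical runner (sketched in the next section) whose exit code tells bisect how to mark each commit:

git bisect start
git bisect bad HEAD
git bisect good abc1234
REM bisect run reads the runner's exit code: 0 = good, 125 = skip, anything else under 128 = bad
git bisect run CommitTestRunner.exe
git bisect reset

One wrinkle: a bisect run converges on the first bad commit rather than visiting every commit, so to collect more than one culprit we would have to restart the session from just after each culprit it finds.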

Custom Test Runner

The test runner has to be intelligent enough to run all of the tests that exercise the code included in a commit. The custom test runner has to look for testable code files in the commit change log, in our case .cs files. When it finds a code file, it will identify the class in the code file and find the test that targets that class. We are assuming one class per code file and one unit test class per code file class. If this convention isn't enforced, then some tests may be missed or we have to do a more complex search. Once all of the test classes are found for the commit's code files, we run the tests. If a test fails, we save the test name and maybe the failure results, exception, stack trace… so it can be associated with the culprit commit. Once all of the tests have run, if any of them failed, we mark the commit as a culprit. After the test and culprit identification is complete, we tell bisect to move to the next commit. As I said before, this will result in a list of culprits and failing test info that we can use in our feedback to the developers.
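
Here is a minimal sketch of what that runner could look like, written so git bisect run can consume its exit code. The mstest path, the test container path, and the Class to ClassTests naming convention are all assumptions of mine, not settled choices:

// Hypothetical per-commit test runner; paths and the naming convention are assumptions.
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

public class CommitTestRunner
{
    public static int Main()
    {
        // Ask git for the code files changed in the commit bisect has checked out.
        var changedFiles = Run("git", "diff-tree --no-commit-id --name-only -r HEAD")
            .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
            .Where(f => f.EndsWith(".cs", StringComparison.OrdinalIgnoreCase))
            .ToList();

        if (!changedFiles.Any())
        {
            return 0; // Nothing testable changed; mark the commit good.
        }

        // Convention: FooBar.cs is covered by a test class named FooBarTests.
        // MSTest's /test option does a substring match on fully qualified test names.
        var testFilters = changedFiles
            .Select(f => "/test:" + Path.GetFileNameWithoutExtension(f) + "Tests");

        var mstest = Process.Start(new ProcessStartInfo
        {
            FileName = @"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\mstest.exe",
            Arguments = @"/testcontainer:.\source\tests\MyTestProject\bin\Debug\MyTestProject.dll "
                + string.Join(" ", testFilters),
            UseShellExecute = false
        });
        mstest.WaitForExit();

        // git bisect run reads the exit code: 0 = good, anything else (except 125) = bad.
        return mstest.ExitCode == 0 ? 0 : 1;
    }

    private static string Run(string fileName, string arguments)
    {
        var process = Process.Start(new ProcessStartInfo
        {
            FileName = fileName,
            Arguments = arguments,
            RedirectStandardOutput = true,
            UseShellExecute = false
        });
        var output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        return output;
    }
}

Persisting the failing test names to a log file between bisect steps would give us the culprit report at the end.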

Make It Faster

We could make this fancy and look for the specific methods that were changed in the commit's code file classes. We would then only find tests that test the methods that were changed. This would make testing focused like a laser and even faster, and we could probably employ Roslyn to handle the code analysis and make finding tests easier. I suspect tools like ContinuousTests (MightyMoose) do something like this, so it's not that far-fetched an idea, but definitely a mountain of things to think about.
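
For a taste of what the Roslyn side could look like, here is a small hypothetical sketch that lists the methods declared in a changed code file; mapping those method names to the tests that exercise them is the hard part I am waving away:

// Hypothetical sketch: list the methods declared in a changed file with Roslyn.
// The file path comes in as the first command line argument.
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

public class MethodLister
{
    public static void Main(string[] args)
    {
        var tree = CSharpSyntaxTree.ParseText(File.ReadAllText(args[0]));

        // Every method declared in the file; diffing two commits' syntax trees
        // would narrow this to just the methods that actually changed.
        var methodNames = tree.GetRoot()
            .DescendantNodes()
            .OfType<MethodDeclarationSyntax>()
            .Select(m => m.Identifier.Text);

        foreach (var name in methodNames)
        {
            Console.WriteLine(name);
        }
    }
}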

Conclusion

Well this is just a thought, a thesis if you will, and if it works, it will open up all kinds of possibilities to improve our Code Quality Pipeline. Thanks Ayende and please think about open sourcing that bisect.ps1 PowerShell script 🙂

TestPipe Test Automation Framework Release Party

Actually, you missed the party I had with myself when I unchecked private, clicked save on GitHub, and officially released TestPipe. You didn't miss your chance to check out TestPipe, a little Open Source project that has the goal of making automated browser based testing more maintainable for .NET'ters. The project source code is hosted on GitHub and the binaries are hosted on NuGet.

If you would like to become a TestPipe Plumber and contribute, I’ll invite you to the next party :).

[Image: Results of my personal make a logo in 10 minutes challenge.]

Architecture Validation in Visual Studio

As a part of my Quality Pipeline I want to validate my code against my architectural design. This means I don't want invalid code integrations, like client code calling directly into data access code. With Visual Studio 2012 this is no problem. First I had to create a Modeling Project. Then I captured my architecture as a layer diagram. I won't go over the details of how to do this, but you can find resources here:

http://www.dotnetcurry.com/ShowArticle.aspx?ID=848
http://msdn.microsoft.com/en-us/library/57b85fsc(v=vs.110).aspx

Next I added

<ValidateArchitecture>true</ValidateArchitecture>

to my model project’s .modelproj file. This instructs MSBuild to validate the architecture for each build. Since this is configured at the project level it will validate the architecture against all of the layer diagrams included in the project.
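For context, the property just sits in the project's PropertyGroup; a trimmed sketch of the .modelproj, with everything else elided:

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <ValidateArchitecture>true</ValidateArchitecture>
  </PropertyGroup>
</Project>
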

For a simpler way to add the configuration setting, here is an MSDN walkthrough – http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto

  1. In Solution Explorer, right-click the modeling project that contains the layer diagram or diagrams, and then click Properties.
  2. In the Properties window, set the modeling project’s Validate Architecture property to True.
    This includes the modeling project in the validation process.
  3. In Solution Explorer, click the layer diagram (.layerdiagram) file that you want to use for validation.
  4. In the Properties window, make sure that the diagram’s Build Action property is set to Validate.
    This includes the layer diagram in the validation process.

Adding this configuration to the project file only validates my local build. As part of my Quality Pipeline I also want to validate on Team Build (my continuous build server). There was some guidance out on the web and blogosphere, but for some reason my options did not match what they were doing. You can try the solution on MSDN (http://msdn.microsoft.com/en-us/library/dd409395(v=vs.110).aspx#ValidateAuto). Like I said, this didn't work for me. I had to right-click the build definition in Build Explorer and click Edit Build Definition. On the Process tab, under Advanced, I added

/p:ValidateArchitecture=true

to MSBuild Arguments.

Now my code is guarded against many of the issues that result from implementations that violate the designed architecture.

.NET Code Coverage with OpenCover

I made more progress in improving my Code Quality Pipeline. I added test code coverage reporting to my build script. I am using OpenCover and ReportGenerator to generate the code coverage reports. After getting these two tools from NuGet and Binging a few tips, I got this going by writing a batch script to handle the details and having NAnt run the bat from a CodeCoverage target (sketched after the script below). Here is my bat:

REM This is to run OpenCover and ReportGenerator to get test coverage data.
REM OpenCover and ReportGenerator were added to the solution via NuGet.
REM Need to make this a real batch file or execute from NANT.
REM See reference, https://github.com/sawilde/opencover/wiki/Usage, http://blog.alner.net/archive/2013/08/15/code-coverage-via-opencover-and-reportgenerator.aspx
REM Bring dev tools into the PATH.
call "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\VsDevCmd.bat"
mkdir .\Reports
REM Restore packages
msbuild .\.nuget\NuGet.targets /target:RestorePackages
REM Ensure build is up to date
msbuild "MyTestSolution.sln" /target:Rebuild /property:Configuration=Release;OutDir=.\Releases\Latest\NET40\
REM Run unit tests
.\packages\OpenCover.4.5.2316\OpenCover.Console.exe -register:user -target:"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\mstest.exe" -targetargs:"/testcontainer:.\source\tests\MytestProjectFolder\bin\Debug\MyTestProject.dll" -filter:"+[MyTestProjectNamespace]* -[MyTestProjectNamespace.*]*" -mergebyhash -output:.\Reports\projectCoverageReport.xml

REM the filter +[MyTestProjectNamespace]* includes all tested classes, -[MyTestProjectNamespace.*]* excludes items not tested
REM Generate the report
.\packages\ReportGenerator.1.9.1.0\ReportGenerator.exe -reports:".\Reports\projectCoverageReport.xml" -targetdir:".\Reports\CodeCoverage" -reporttypes:Html,HtmlSummary -filters:-MyTestProject*
REM Open the report - this is just for local running
start .\Reports\CodeCoverage\index.htm
pause
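
The NAnt side is then just a thin wrapper; a minimal sketch, assuming the batch file above is saved as coverage.bat next to the build file:

<target name="CodeCoverage" description="Run OpenCover and ReportGenerator via the batch script.">
  <exec program="cmd.exe" commandline="/c coverage.bat" />
</target>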

Issues

I have integration tests that depend on files in the local file system. These were failing because OpenCover runs the tests from a different path than the one the files are copied to during build. To overcome this I added the DeploymentItem attribute to my test classes for all of the files my tests depend on. This attribute will cause the files to be moved to the test run location with the DLLs when OpenCover does its thing.

[TestClass]
[DeploymentItem("YourFile.xml")] //Can also be applied to [TestMethod]
public class YourAwesomeTestClass
{

}

Another problem I had prevented the database connection strings from being read from the app.config. I was running MSTest with the /noisolation command line option. I removed the option and it worked. It seems like /noisolation is there to improve the performance of the test run. I don't see much difference in timing right now, and when I hit a wall in the time of my test execution I will revisit…no premature optimization for me.

Release the Code Quality Hounds

I wanted to start using more quality tools as a part of my initiative to improve my Code Quality Pipeline. So I decided to implement Code Analysis to ensure the project maintains certain coding standards. In addition to the FxCop-style static analysis of the MSIL code by Visual Studio Code Analysis, I decided to also use StyleCop to validate that my source code is up to standard. This would be a major undertaking in most legacy applications, but if you have been following Microsoft coding standards or you're working in a greenfield app I would definitely recommend implementing these code quality cop bots.

Code Analysis

When you enable Code Analysis for your managed code, it will analyze your managed assemblies and report information about them, such as violations of the programming and design rules set forth in the Microsoft .NET Framework Design Guidelines.

To get Code Analysis up and running, I right-clicked on my project and clicked Properties. Then I clicked on Code Analysis.

[Screenshot: the Code Analysis tab of the project properties]

This screen allows us to configure code analysis. What I wanted to do first was create a custom rule set. This allows me to configure and save the rules I want for code analysis. So, I selected Microsoft All Rules, clicked Open, then File > Save As and saved the rule set file in my solution root folder. Then I edited the file in Notepad++ to give it a better name. Then I went back to VS and selected my custom rule set.
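
The rule set file itself is plain XML, which is why renaming and tweaking it in an editor works; a hypothetical example of the format (the file name and the disabled rule are illustrations, not my actual choices):

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="MyCustomRules" Description="Code Analysis rules for the solution." ToolsVersion="10.0">
  <Include Path="allrules.ruleset" Action="Default" />
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
    <!-- Example: turn off a single rule from the included set -->
    <Rule Id="CA1704" Action="None" />
  </Rules>
</RuleSet>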

To enable Code Analysis on build, I checked the box for Enable Code Analysis on Build. You may want to only do this on your release target or another target that you run before certifying a build production ready. I did this on every project in the solution, including the tests, as I didn't want to get lazy with my quality in tests. Having Code Analysis enabled will also cause Code Analysis to run on my Team Build Gated Check-in build, as it is set to run Code Analysis as configured on the projects.

Also, I wanted Code Analysis violations treated as errors on build, so I added this to the Debug PropertyGroup:

 <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>
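
In context, that property sits in the configuration-specific PropertyGroup alongside the other Code Analysis settings; a trimmed sketch, where the rule set file name is a placeholder:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <CodeAnalysisRuleSet>..\MyCustomRules.ruleset</CodeAnalysisRuleSet>
  <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>
</PropertyGroup>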

Lastly, to do a quick test, I right-clicked on my Solution in Solution Explorer and clicked "Run Code Analysis on Solution." Since I had some code already written, it did return some issues, which I fixed. I then checked the code in to TFS and the code analysis also ran on Team Build.

StyleCop

StyleCop analyzes C# source code to enforce a set of style and consistency rules. First I downloaded and installed StyleCop.

For some reason getting StyleCop up and running wasn't as easy as it usually is, so I am going to explain an alternate route, but bear in mind that the best route is NuGet.

Actually, I installed StyleCop from NuGet, but it didn't configure the tooling in Visual Studio properly, so I downloaded the package from the project site, http://stylecop.codeplex.com, and reinstalled from the downloaded installer. I tried adding the NuGet package to add the StyleCop MSBuild target, but that too resulted in too many issues. So I followed the instructions in the StyleCop docs to get a custom build target installed for my projects: http://stylecop.codeplex.com/wikipage?title=Setting%20Up%20StyleCop%20MSBuild%20Integration&referringTitle=Documentation.

I decided to show StyleCop violations as errors. So, I added

<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

to the project files. This instructs the project to treat StyleCop violations as build errors. Here it is in context in one of my project files:

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProjectGuid>{077D5FD7-F665-4CB1-947E-8E89D47E2689}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>CharlesBryant.AppNgen.Core</RootNamespace>
    <AssemblyName>AppNgen.Core</AssemblyName>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <SccProjectName>SAK</SccProjectName>
    <SccLocalPath>SAK</SccLocalPath>
    <SccAuxPath>SAK</SccAuxPath>
    <SccProvider>SAK</SccProvider>
    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>
    <StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>
  </PropertyGroup>

I wanted StyleCop integrated into my local builds and Team Build, so I added the build target to the project files. The build target is installed when you install StyleCop, if you select it. You will have to copy the target file from the Program Files\MSBuild folder to a folder within your solution folder so it can be checked into TFS. Then you can point to it in your project files to cause StyleCop to run on build by adding

<StyleCopTargets>..\..\..\StyleCop.Targets</StyleCopTargets>

to your PropertyGroup and

<Import Project="$(StyleCopTargets)" />
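
The import typically goes near the bottom of the project file, after the standard C# targets import; a trimmed sketch:

  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
  <Import Project="$(StyleCopTargets)" />
</Project>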

If I were you, I would stick with the StyleCop.MSBuild NuGet package to integrate StyleCop into your build. The route I took and explained above required a lot of debugging and reworking that I don't want to blog about. I actually tried the NuGet package in another project and it worked perfectly.

Conclusion

In the end, I have Code Analysis and StyleCop running on every build, locally or remote, and violations are treated as errors so builds are prevented when my code isn’t up to standard. What’s in your Code Quality Pipeline?