Category: Pipeline

Test Automation Tips: 1

#1 Don’t test links unless you are testing links.

Unless you are specifically testing menus or navigation links, don’t automate the clicking of links to navigate to the actual page under test. Instead, take the shortest route to set up the test.

Let’s say I have a test that starts by clicking the product catalog link, then clicks on a product to get to the product details page, just so I can verify that a coupon appears in product details. This test exercises concepts that are outside the intent of the test: I am trying to test coupon visibility, not links. Instead, I should navigate directly to the product details page and make my assertion.
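To make the shortest route concrete, here is a minimal sketch of the direct version of that coupon test, assuming Selenium WebDriver and NUnit in C#; the URL and the coupon element id are made up for illustration.

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class CouponVisibilityTests
{
    [Test]
    public void Coupon_is_visible_on_product_details()
    {
        using var driver = new ChromeDriver();
        // Go straight to the page under test; no catalog or navigation links involved.
        driver.Navigate().GoToUrl("https://shop.example.com/products/42");
        Assert.That(driver.FindElement(By.Id("coupon")).Displayed, Is.True);
    }
}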

If you use the link method and fool yourself into believing that your test is acting like a user, then when links are broken your test will fail for a reason other than the one you are testing. If you use the same broken link in multiple tests, you will have multiple failures and will have to fix and rerun the tests to get feedback on the features you are really trying to test. If the broken tests are in a nightly run, you just lost a lot of test coverage, and you don’t know if the features are passing or failing because they never got tested. The link method can cause a dramatic loss of test coverage. So, test links only in navigation tests, where the links are front and center in the purpose of the test. For all other tests, take the shortest route to the page under test.


When I am writing and maintaining large functional UI tests, I often realize some things that would make my life easier. I decided to write this series of posts to describe some of the tips I keep for myself, in hopes that they prove helpful to someone else. What are some of your tips?

Test Automation Tips

What Makes a Good Candidate for Test Automation?

Writing large, UI-based functional tests can be expensive in terms of money and time, and it is sometimes hard to know where to focus your test budget. New features are good candidates, especially the most common successful and exceptional paths through the feature. But when you have a monster legacy application with little to no coverage, it can be hard to ascertain where you’ll get the biggest bang for the buck.

Bugs, Defects, Issues…It Doesn’t Work

I believe bugs provide good candidates for automation, especially if regression is a problem for you. Even if regression is not an issue, it’s always good to protect against regressions, so automating bugs is a win-win in terms of risk assessment. Hopefully, whoever finds a bug, or whoever adds it to the bug database, provides reproduction steps. If the steps are a good candidate for automation, automate it.

Analyzing Bugs

What makes a bug a good candidate for test automation? When analyzing bugs for automated testing, I like to evaluate them on four basic criteria. In ascending order of precedence:

  • The steps are easy to model in the test framework.
  • The steps are maintainable as an automated test.
  • The bug was found before.
  • The bug caused a lot of pain to users or the company.

It is just common sense that “bug caused a lot of pain” carries the most weight. If a bug caused a lot of pain, you don’t want to repeat it, unless you like pain. Yet, if the painful bug would be a maintenance nightmare as an automated test, the steps are hard to model, and the bug wasn’t found before, you may want to just mark it for manual regression. If a bug matches two or more of the criteria, I’d say it is a high-priority candidate for test automation.

Conclusion

This is just my opinion, and there is no study to prove any of it. I know this has been thought about and pondered, maybe even researched, by someone. If you know where I can find some good discussions on this topic, or if you want to start one, please let me know.


Finding Bad Build Culprits…Who Broke the Build!

I found an interesting Google talk on finding culprits automatically in failing builds – https://www.youtube.com/watch?v=SZLuBYlq3OM. It is actually a lightning talk from GTAC 2013 given by grad students Celal Ziftci and Vivek Ramavajjala. First, they gave an overview of how culprit analysis is done on failures in builds, small tests, and medium-sized tests.

CL, or change list, is a term I first heard in “How Google Tests Software”; it refers to a logical grouping of changes committed to the source tree, much like a git feature branch.

Build and Small Test Failures

When the build fails because of a build issue, we build the CLs separately until one of them fails the build. When the failure is in a small test (unit test), we do the same thing: build the CLs separately and run the tests against them to find the culprit. In both cases, we can do the analysis in parallel to speed it up. This is what I covered in my post on Bisecting Our Code Quality Pipeline, where git bisect is used to recurse the CLs.
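Since each CL builds and tests independently, the scan parallelizes naturally. Here is a rough PLINQ sketch; buildAndTest is a made-up stand-in for “check out the CL, build it, and run the small tests”, not a real API.

using System;
using System.Collections.Generic;
using System.Linq;

static class LinearScan
{
    // Build and test every CL in parallel, then report the first one that failed.
    public static string FindFirstFailure(IReadOnlyList<string> cls, Func<string, bool> buildAndTest)
    {
        return cls.AsParallel()
                  .AsOrdered()
                  .Select(cl => (cl, passed: buildAndTest(cl)))
                  .FirstOrDefault(r => !r.passed).cl; // null when every CL passed
    }
}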

Medium Tests

Ziftci and Ramavajjala define these as tests that take less than 8 minutes to run and suggest using a binary search to find the culprit. Target the middle CL and build it: if it fails, the culprit is that CL or one to its left, so we recurse to the left; if it passes, we recurse to the right until we find the culprit.

CL 1 – CL 2 – CL 3 – CL 4 – CL 5 – CL 6

CL 1 is the last known passing CL; CL 6 was the last CL in the failing build. We start by analyzing CL 4, and if it fails, we move left and check CL 3. If CL 3 passes, we mark CL 4 as the culprit. If CL 3 fails, we check CL 2: if CL 2 passes, CL 3 is the culprit, and if CL 2 fails, CL 2 is the culprit, since we already know CL 1 was good.

If CL 4 passed, we would move right and test CL 5, and if it fails, mark CL 5 as the culprit. If it passes, we mark CL 6 as the culprit, because it is the last suspect and we don’t have to waste resources analyzing it.
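Here is the same search as a C# sketch, assuming the usual bisect model where building at a CL includes every CL before it, so results flip from pass to fail at the culprit. buildPasses is a made-up stand-in for “build at this CL and run the medium test”.

using System;
using System.Collections.Generic;

static class CulpritBisect
{
    // cls[0] is the last known good CL; the build containing everything up
    // to the last CL failed, so the culprit sits somewhere after index 0.
    public static string FindCulprit(IReadOnlyList<string> cls, Func<string, bool> buildPasses)
    {
        int good = 0;              // highest index known to pass
        int bad = cls.Count - 1;   // lowest index known to fail
        while (bad - good > 1)
        {
            int mid = (good + bad) / 2;
            if (buildPasses(cls[mid])) good = mid; else bad = mid;
        }
        return cls[bad];           // the first failing CL is the culprit
    }
}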

Large Tests

They defined these as tests that take longer than 8 minutes to run, and they were the primary focus of Ramavajjala and Ziftci’s research. They are developing heuristics that let a tool identify likely culprits by pattern matching. For example, one heuristic analyzes a CL for the number of files changed and gives a higher ranking to CLs with more files changed.

They also have a heuristic that calculates the distance of the code in a CL from base libraries, like the distance from the core Python library. The closer the code is to the core, the more likely it is a core piece of code that has had rigorous evaluation, because many projects may depend on it, and so the less likely it is to be the culprit.
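The talk didn’t publish a formula, so this is only a guess at the shape such a ranking could take; the Suspect type and the weights are invented for illustration.

using System.Collections.Generic;
using System.Linq;

// One CL plus the two signals described above.
public record Suspect(string ClId, int FilesChanged, int DistanceFromCoreLibs);

public static class SuspectRanker
{
    // More files changed means more suspicious; a smaller distance from the
    // core libraries means better-vetted code, so less suspicious.
    public static IEnumerable<Suspect> Rank(IEnumerable<Suspect> suspects) =>
        suspects.OrderByDescending(s => 1.0 * s.FilesChanged + 0.5 * s.DistanceFromCoreLibs);
}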

They seemed to be investing a lot of time into ensuring they can do this fast, stressing caching and optimization. It sounds interesting, and once they have had a chance to run their tool and heuristics against the massive number of tests at Google (they both became Google employees), hopefully they can share the heuristics that prove most adept at finding culprits at Google, and maybe anywhere.

Thoughts

They did mention possibly using a heuristic that looks at the logs generated by build failures to identify keywords that may provide more detail on who the culprit may be. I had a similar thought after I wrote the git bisect post.

Many times when a large test fails, there are clues left behind that we would normally inspect manually to find the culprit. If the test has good messaging on its assertions, that is the first place to look. In a large end-to-end test there may be many places for the test to fail, but if the failure message gives a clue about what failed, it helps to find the culprit. Then again, they spoke of two-hour tests, and I have never seen a single test that takes two hours, so what I was thinking about and what they are dealing with may be different animals.

There is also the test itself. If the test covers a feature and I know that only one file in one CL is included in the dependencies involved in the feature test, then I have a candidate. There are also application logs and system logs. The goal, as I saw it, is to find a trail that leads me back to a class, method, or file that matches a CL.

The problem with me seriously trying to solve this is that I don’t have a PhD in Computer Science; actually, I don’t have a degree at all, except from the school of hard knocks. When they talked about the binary search for medium-sized tests, it sounded great. I kind of know what a binary search is: I have read about it and remember writing a simple one years ago, but if you ask me to articulate the benefits of using a quad tree instead of a binary search, or to write a particular one on the spot, I will fumble. So, finding an automated way to analyze logs in a thorough, fast, and resource-friendly manner is a lot for my brain to handle. Yet, I haven’t let my shortcomings stop me yet, so I will continue to ponder the question.

We are talking about parsing and matching strings, not rocket science. This may be a chance for me to use or learn a new language more adept at working with strings than C#.
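Staying in the language I do know for now, here is a back-of-the-napkin C# sketch of that string matching: pull the .cs file names out of a failure log (stack traces usually mention them) and flag any CL that touched one of those files. The regex and the shape of the CL data are invented for illustration.

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

public static class LogMatcher
{
    public static IEnumerable<(string ClId, string File)> Suspects(
        string failureLog, IDictionary<string, IReadOnlyList<string>> filesByCl)
    {
        // Collect the file names mentioned anywhere in the failure log.
        var mentioned = new HashSet<string>(
            Regex.Matches(failureLog, @"[\w.\\/-]+\.cs")
                 .Cast<Match>()
                 .Select(m => Path.GetFileName(m.Value)));

        // Any CL that changed one of those files is a candidate culprit.
        return filesByCl.SelectMany(kv => kv.Value
            .Where(f => mentioned.Contains(Path.GetFileName(f)))
            .Select(f => (kv.Key, f)));
    }
}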

Conclusion

At any rate, I find this stuff fascinating and useful in my new role. Hopefully, I can find more on this subject.


Configure Remote Windows 2012 Server

Have you ever needed to inspect or configure a server and didn’t want to go through the hassle of remoting into it? Me too. Well, as I took a deeper dive into the bowels of PowerShell 4, I found a cmdlet that allows me to issue PowerShell commands on my local machine and have them run on a remote server. I know you’re excited; I couldn’t contain myself either. You will need PowerShell 4 and a Windows 2012 server that you have login rights to control. I am going to give you the commands to get you started, and then you can Bing the rest, but it’s pretty simple. Once you have established the connection, you just issue PowerShell commands as if you were running them locally. Basically, you can configure your remote server from your local machine. You don’t even need to activate the GUI on the server; you can drive it all from PowerShell and save the resources the GUI would need.

Security

Is it secure? About as secure as remoting into the server through a GUI, though the vulnerabilities you have to deal with are different. Security will always be an issue. This is something I will have to research more, but I do know that you can encrypt the traffic and keep the messages deep inside your DMZ.

Code

Note: Anything before the > is part of the command prompt.

PS C:\> Enter-PSSession -ComputerName server01
[server01]: PS C:\Users\CharlesBryant\Documents>

This starts the session. Notice that the command prompt now shows the server name in brackets and I am in my documents folder on the server.

[server01]: PS C:\Users\CharlesBryant\Documents> hostname
server01

Here I issue the hostname command to make sure I’m not dreaming and I am actually on the server. Yup, this is really happening.

[server01]: PS C:\Users\CharlesBryant\Documents> Get-EventLog -List | Where-Object {$_.LogDisplayName -eq "Application"}
Max(K) Retain OverflowAction Entries Log
------ ------ -------------- ------- ---
4,096 0 OverwriteAsNeeded 3,092 Application

Yes…I just queried the event log on a remote server without having to go through the remote desktop dance. BooYah! Ending your session is even easier.

[server01]: PS C:\Users\CharlesBryant\Documents> Exit

Enjoy.

How Much Does Automated Test Maintenance Cost?

I saw this question on a forum and it made me pause for a second to think about it. The quick answer is: it varies. The sarcastic answer is: it costs as much as you spend on it; or, how about, it costs as much as you didn’t spend on creating a maintainable automation project.

I have only been involved in two test automation projects prior to my current position, and in both I also had feature development responsibility. On one project, measured against time spent developing features, I spent about 10-15% of my time maintaining tests and about 25% writing them, so maintenance was about 30-40% of my total test time. Based on what I know today, some of my past tests weren’t that good, so maybe the numbers should have been higher or lower. On the other project, test maintenance was closer to 50%, and that was because of a poor tool choice. I can state these numbers because I tracked the time I spent, but I could not use them as benchmarks to estimate maintenance cost on my current project, or any other, unless the context was very similar and I could easily draw the comparison.

I have seen where someone might say “it’s typically between this and that percentage of development cost,” or something similar. Trying to quantify maintenance costs is hard, very hard, and it depends on the context. You can try to estimate based on someone else’s guess at a rough percentage and hope it pans out, but in the end it is dependent on execution and environment. An application that changes often vs. one that rarely changes, poorly written automated tests, a bad choice of automation framework, the skill of the automated tester…there is a lot that can change cost from project to project.

I am curious whether someone has a formula to estimate cost across all projects, but I do know that an insane focus on the maintainability of your automated test suites can significantly reduce costs in the long run. So the better focus, IMHO, is on getting the best test architecture, tools, framework, and people, and making maintainability a high-priority goal. Properly tracking maintenance in your project management or bug tracking system can also provide a more valuable measure of cost across the life of a project: if you track maintenance cost (time), you get a benchmark that is customized to your context. Trying to calculate cost up front, with nothing to base the calculations on but a wild, uneducated guess, can lead to a false sense of security.

So, if you are trying to plan a new automation project and you ask me about cost the answer is, “The cost of having automated tests…priceless. The cost of maintaining automated tests…I have no idea.”

Configure MSDTC with PowerShell 4.0

Continuing on the PowerShell theme from my last post, I wanted to save some knowledge on working with DTC in PowerShell. I am not going to list every command, just what I’ve used recently to configure DTC. You can find more information on MSDN, http://msdn.microsoft.com/en-us/library/windows/desktop/hh829474%28v=vs.85%29.aspx, or TechNet, http://technet.microsoft.com/en-us/library/dn464259.aspx.

View DTC Instances

Get-Dtc will print a list of the DTC instances on the machine.

PS> Get-Dtc

Stop and Start DTC

Stop

PS> Stop-Dtc -DtcName Local

Stopping DTC will abort all active transactions, so you will be asked to confirm this action unless you turn off confirmation.

PS> Stop-Dtc -DtcName Local -Confirm:$False

Start

PS> Start-Dtc -DtcName Local

Status

You could use a script to confirm that DTC is started or stopped. When you call Get-Dtc and pass it an instance name, it returns an object with a “Status” property that tells you whether the DTC instance is Started or Stopped.

PS> Get-Dtc -DtcName Local

Network Settings

You can view and adjust DTC Network Settings.

View

To view the network settings:

PS> Get-DtcNetworkSetting -DtcName Local

-DtcName is the name of the DTC instance.

Set

To set the network settings:

PS> Set-DtcNetworkSetting -DtcName Local -AuthenticationLevel Mutual -InboundTransactionsEnabled $True -LUTransactionsEnabled $True -OutboundTransactionsEnabled $True -RemoteAdministrationAccessEnabled $False -RemoteClientAccessEnabled $True -XATransactionsEnabled $False

Here we name the DTC instance to set values for, then list the parameter/value pairs we want to set. $True and $False are built-in PowerShell variables that return the boolean values true and false, respectively. If you run this Set command, you will get a message asking if you want to stop DTC. I tried stopping DTC first and then running the command, and it still presented the confirmation message. You can add -Confirm:$False to turn off the confirmation message.

Conclusion

There is a lot more you can do, but this fits my automation needs. The only thing I couldn’t figure out is how to set the DTC Logon Account. There may be a magical way of finding the registry keys and setting them, or something, but I couldn’t find anything on it. If you know, please share…I’ll give you a cookie.

http://www.sqlha.com/2013/03/12/how-to-properly-configure-dtc-for-clustered-instances-of-sql-server-with-windows-server-2008-r2/ – has some nice info on DTC, and on DTC in a clustered SQL Server environment. He even has a PowerShell script to automate configuration…kudos. Sadly, his script doesn’t set the Logon Account either.


Bisecting Our Code Quality Pipeline

I want to implement gated check-ins, but it will be some time before I can restructure our process and tooling to accomplish it. What I really want is to keep the source tree green and, when it is red, provide feedback to quickly get it green again. I want to run tests on every commit and give developers feedback on their failing commits before they pollute the source tree. Unfortunately, running the tests as we have them today would take too long on every commit. I came across a quick blog post by Ayende Rahien on Bisecting RavenDB, where they used git bisect to find the culprit that failed a test. They gave no information on how it actually worked, just a tease that they are doing it. I left a comment to see if they would share some of the secret sauce behind their solution, but until I get that response I wanted to ponder it for a moment.

Git Bisect

To speed up testing and also allow test-failure culprit identification with git bisect, we would need a custom test runner that can identify which tests to run and run them. We don’t run tests on every commit; we run tests nightly against all the commits that occurred during the day. When a test fails, it can be difficult to identify the culprit(s). This is where Ayende steps in, with his team’s idea to use bisect to help identify the culprit. Bisect works by traversing commits, from the commit we mark as the last known good one to the last commit included in the failing nightly test. As bisect iterates over the commits, it pauses at each one and lets you test it and mark it good or bad. In our case, we could run tests against a single commit. If they pass, tell bisect it’s good and move to the next. If they fail, save the commit and the failing test(s) as a culprit, tell bisect it’s bad, and move to the next. This will result in a list of culprit commits and their failing tests that we can use for reporting and for bashing over the heads of the culprit owners (just kidding…not).
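The git side of this is pleasantly small. A hedged sketch of what the bisect session could look like, where run-tests.ps1 stands in for whatever script builds the checked-out commit, runs its tests, and exits 0 on pass and non-zero on fail (exiting 125 tells bisect to skip a commit it can’t test):

git bisect start
git bisect bad HEAD                # the last commit in the failing nightly run
git bisect good <last-green-sha>   # the commit the last passing nightly ran against
git bisect run ./run-tests.ps1     # bisect drives the search using the exit codes
git bisect reset                   # return to where we started

Note that git bisect run hunts down the first bad commit; collecting every culprit, as described above, would mean scripting the traversal ourselves and testing each commit in turn.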

Custom Test Runner

The test runner has to be intelligent enough to run all of the tests that exercise the code included in a commit. It has to look for testable code files in the commit change log, in our case .cs files. When it finds a code file, it identifies the class in the file and finds the test class that targets it. We are assuming one class per code file and one unit test class per code file class; if this convention isn’t enforced, some tests may be missed, or we have to do a more complex search. Once all of the test classes are found for the commit’s code files, we run the tests. If a test fails, we save the test name, and maybe the failure results, exception, stack trace…, so it can be associated with the culprit commit. Once all of the tests have run, if any of them failed, we mark the commit as a culprit. After the testing and culprit identification are complete, we tell bisect to move to the next commit. As I said before, this will result in a list of culprits and failing-test info that we can use in our feedback to developers.
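Here is a rough sketch of that convention-based lookup, assuming one class per .cs file and a FooTests class for every Foo class; the “Tests” suffix is our assumed convention, not a rule of any tool.

using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class CommitTestSelector
{
    // Map the commit's changed code files to the test classes that target them.
    public static IEnumerable<string> TestClassesFor(IEnumerable<string> changedFiles) =>
        changedFiles
            .Where(f => f.EndsWith(".cs"))
            .Select(Path.GetFileNameWithoutExtension)
            .Where(name => !name.EndsWith("Tests")) // skip the test files themselves
            .Select(name => name + "Tests")
            .Distinct();
}

The resulting class names could then be handed to the test runner as a filter; newer NUnit console runners, for example, can select tests by class with their --where option.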

Make It Faster

We could make this fancy and look for the specific methods that were changed in the commit’s code files, and then only find tests that test those methods. This would make testing laser-focused and even faster. We could probably employ Roslyn to handle the code analysis and make finding the tests easier. I suspect tools like ContinuousTests (MightyMoose) do something like this, so it’s not that far-fetched an idea, but it is definitely a mountain of things to think about.

Conclusion

Well, this is just a thought, a thesis if you will, and if it works, it will open up all kinds of possibilities to improve our Code Quality Pipeline. Thanks, Ayende, and please think about open sourcing that bisect.ps1 PowerShell script 🙂

Working with the Windows Registry with Powershell 4.0

I figured I would rehash some of the learning I did on working with the registry in PowerShell. Most of my research on this topic came from a couple of TechNet pages. There is nothing really new here; I’m just trying to commit what I learned there to my brain.

WARNING: Editing your registry is dangerous. Make sure you know what you’re doing, document your changes, and have a backup so you can revert when you mess up.

The first interesting tidbit I learned was that PowerShell looks at the registry like a drive, so working with the registry is similar to working with files and folders. The big difference is that keys are treated like folders, and registry entries and their values are properties on the key. So, there is no concept of a file when working with the registry.

Viewing Registry Keys

Just like working with the file system in PowerShell, we can use the powerful Get-ChildItem command.

PS> Get-ChildItem -Path hkcu:\

Interesting, right? hkcu is the HKEY_CURRENT_USER registry hive, and it’s treated like a drive with all of its keys as folders under the drive. Actually, hkcu is a PowerShell drive.

PowerShell Drives

PowerShell creates a data store for each PowerShell drive, and this allows you to work with the registry like you do the file system.

If you want to view a list of the PowerShell drives in your session run the Get-PSDrive command.

PS> Get-PSDrive

Did you notice the other drives like Variable and Env? Can you think of a reason to use the Env drive to get access to Path or other variables?

Since we are working with a drive, we can achieve the same results we got from Get-ChildItem with basic command line syntax.

PS> cd hkcu:\
PS> dir

Path Aliases

We can also represent the path with the registry provider name followed by “::”. The registry provider name is Microsoft.PowerShell.Core\Registry, and it can be shortened to Registry. The previous example can be written as:

PS> Get-ChildItem -Path Microsoft.Powershell.Core\Registry::HKEY_CURRENT_USER
PS> Get-ChildItem -Path Registry::HKCU

The drive syntax (hkcu:\) is much easier to type, but including the provider is more verbose and explicit about what is happening (fewer comments needed in the code to explain it).

More Get-ChildItem Goodies

The examples above only list the top-level keys under the path. If you want to list all keys, you can use the -Recurse parameter, but if you do this on a path with many keys, you will be in for a long wait.

PS> Get-ChildItem -Path Registry::HKCU -Recurse

We can use the Set-Location command to set our current location in the registry. With a location set, we can use “.” in a path to refer to the current location and “..” for the parent key.

PS> Set-Location -Path Registry::HKCU\Environment
PS> Get-ChildItem -Path .
PS> Get-ChildItem -Path '..\Keyboard Layout'

Above, we set the location to the Environment key, then get the items for that key using only “.” as the path, and then get the items in a sibling key by using “..” to represent the parent key and naming the key under the parent we want items for.

When using Get-ChildItem on the registry, we have its parameters at our disposal: Path, Filter, Include, and Exclude. Since these parameters only work against names, we have to use more powerful cmdlets to get more meaningful filtering done. In the example provided on TechNet, we get all keys under HKCU:\Software with no more than one subkey and exactly four values:

PS> Get-ChildItem -Path HKCU:\Software -Recurse | Where-Object -FilterScript {
    ($_.SubKeyCount -le 1) -and ($_.ValueCount -eq 4)
}

Working with Registry Keys

As we saw, registry keys are PowerShell items, so we can use the other PowerShell item commands. Keep in mind that you can represent the paths in any of the ways we already covered.

Copy Keys

Copy a key to another location.

PS> Copy-Item -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion -Destination Registry::HKCU -Recurse

This copies the keys under Path to Destination. Since we added -Recurse, all of the keys, not just the top level, will be copied.

Creating Keys

Create a new key.

PS> New-Item -Path Registry::HKCU\_DeleteMe

Deleting Keys

Delete a key.

PS> Remove-Item -Path Registry::HKCU\_DeleteMe\* -Recurse

This will remove all items under _DeleteMe. \* tells PowerShell to delete the items but keep the container; if we didn’t use \*, the container, _DeleteMe, would be removed too. -Recurse removes all items in the container, not just the top-level items. If we attempted the remove without -Recurse and the item had child items, we would get a warning that we are about to remove the item and all of its children; -Recurse suppresses that message.

Working with Registry Entries

Working with registry keys is simple because we get to reuse what we know about working with the file system in PowerShell. One wrinkle is that registry entries are represented as properties of registry key items, so we have to do a little more work to deal with entries.

List Entries

The easiest way, IMHO, to view registry entries is with Get-ItemProperty.

PS>  Get-ItemProperty -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion

This will list all of the properties for the key, with the PowerShell-related properties prefixed with “PS”.

Get Single Entry

To get a single entry, we use the same Get-ItemProperty and add the -Name parameter to specify the entry we want returned.

PS> Get-ItemProperty -Path HKLM:\Software\Microsoft\Windows\CurrentVersion -Name DevicePath

This will return just the DevicePath entry along with the related PS properties.

Create New Entry

We can add a new registry key entry with the New-ItemProperty command.

PS> New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -PropertyType String -Value $PSHome

There is a little more complexity to this operation, but it is still not rocket science. We added two more parameters. -PropertyType signifies the type of entry to create and must be a Microsoft.Win32.RegistryValueKind (how to deal with 64-bit is something I haven’t dealt with, so I leave it to you for now). -Value in the example uses $PSHome, a built-in PowerShell variable holding the install directory of PowerShell; you can use your own values or variables for -Value.

PropertyType Value   Meaning
Binary               Binary data
DWord                A number that is a valid UInt32
ExpandString         A string that can contain environment variables that are dynamically expanded
MultiString          A multiline string
String               Any string value
QWord                8 bytes of binary data

Rename Entry

To rename an entry, just specify the current name and the new name.

PS> Rename-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -NewName PSHome -passthru

-NewName is the new name for the entry. The -PassThru parameter is optional; it displays the renamed value.

Delete Entry

To delete an entry, just specify the name.

PS> Remove-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PSHome

Conclusion

Is it really this easy? Why, yes it is, young Padawan…yes it is. You too can control the force to destroy your registry with PowerShell :).


Scripting Builds with C#, Yes I Said Scripting

If you haven’t partaken of the delicious goodness that is Roslyn, don’t fret; it’s easy to get in on the fun. Have you heard of ScriptCS? It’s kind of what I hoped PowerShell would become: scripting with C#. No stinking compiling and complex builds, no having to learn new syntax and functionality; just code in C# and go. This is what ScriptCS brings by way of Roslyn. I had fun just writing C# and running it, but I needed a practical reason to script C# in the real world.

Then it hit me. I was in the middle of writing a build script for a project and wondered how I could do it with ScriptCS. Looking at my NAnt script, I searched for a way to port it to ScriptCS and failed to envision an easy one. So, I did some searching and stumbled upon Nake (actually, I think a co-worker may have told me about it; I can’t remember). As the author, Yevhen Bobrov, describes Nake: “Write your build automation scripts in C# without paying the angle bracket tax!” According to Yevhen, it uses the ScriptCS pre-processing engine and takes advantage of Roslyn’s syntax-rewriting features to rewrite task invocations.

Enough talk, let’s code. We will start with a build in NAnt using the sample from nant.org.

<?xml version="1.0"?>
<project name="Hello World" default="build" basedir=".">
  <description>The Hello World of build files.</description>
  <property name="debug" value="true" overwrite="false" />
  <target name="clean" description="remove all generated files">
    <delete file="HelloWorld.exe" failonerror="false" />
    <delete file="HelloWorld.pdb" failonerror="false" />
  </target>
  <target name="build" description="compiles the source code">
    <csc target="exe" output="HelloWorld.exe" debug="${debug}">
      <sources>
        <includes name="HelloWorld.cs" />
      </sources>
    </csc>
  </target>
</project>

We can do something similar in Nake:

using System;
using System.IO;
using Nake;

//The Hello World of build files.
public static string basedir = ".";
public static string configuration = "Debug";
public static string platform = "AnyCPU";

[Task] public static void Default()
{
    Build();
}

//remove all generated files
[Task] public static void Clean()
{
    File.Delete("HelloWorld.exe");
    File.Delete("HelloWorld.pdb");
}

//compiles the source code
[Task] public static void Build()
{
    Clean();
    MSBuild
        .Projects("HelloWorld.csproj")
        .Property("Configuration", configuration)
        .Property("Platform", platform)
        .Targets(new[] { "Rebuild" })
        .BuildInParallel();
}

I really like Nake; it feels like regular C# coding to me. There may be more lines, but they are easily readable lines, IMHO. Not to mention, I have access to the full power of C#, not just the features added to a scripting tool like NAnt.

After working with Nake for a little while, I found another project that targets task scripting with C#: Bau. Bau’s author, Adam Ralph, tags the project as “The C# task runner. It’s built as a scriptcs script pack and is inspired by Rake, Grunt and gulp.” He uses an interesting approach where tasks are chained together like a long, fluent build train. I haven’t had the chance to actually use Bau, but I read through some of the source and documentation. Having to chain tasks together seems foreign and limiting, as I am not sure how to achieve a reusable, composable design in the manner I am accustomed to. It’s probably very simple, just not as readily apparent to me as it is in Nake.

Well, I’ll keep it short. It’s good to see options beginning to emerge in this space, and I hope the community contributes to them. Both Nake and Bau open up build scripting to C# developers: we get to leverage what we know about C# and script with syntax and tools we are already familiar with. We get the task-based nature of NAnt with the familiarity of C#. So, if you aren’t ready to take the plunge into Roslyn, how about testing the waters with C# build scripting?

Footnote: Nake hasn’t had any commits in 5 months, and Yevhen lists his location as Kiev, Ukraine. Yevhen, I have been watching the news about the violence happening in Ukraine, and if you are there, I hope you are OK. Thanks for Nake.