Category: Dev

OMG! .Net is Open Source

The world is a-changin’.

Straight from the horse’s mouth: http://blogs.msdn.com/b/dotnet/archive/2014/11/12/net-core-is-open-source.aspx. They also have a GitHub repo, but it currently doesn’t include everything – https://github.com/dotnet/corefx.

This is very exciting news for .Net developers, as it opens the door to eventually applying our .Net skills on other platforms (Linux, MacOS, iOS, Android…). Also, getting the opportunity to browse the code base and learn from it is appealing. Hopefully, I get a chance to contribute.

Sell Products and Services on NationBuilder

I just received my certification as a NationBuilder Expert and I wanted to give something to the NationBuilder community to celebrate :). As I thought about something I could do, I remembered that I have a customer that wants to sell products in their nation. So, I will share that solution. I warn you, it is not for the technically challenged or the faint of heart.

At the moment there isn’t an easy way to sell products and services on NationBuilder websites. So, here is how I approached this problem, but the instructions may not be applicable in the future once NationBuilder makes changes to the system. Please note that charging tax and shipping is an issue with this approach, so you may want to rethink your usage of it. If you need help setting this up, leave a comment or find me in the NationBuilder Expert directory.

Overview

We will use the following page types to mimic a shopping cart experience:

  • Basic
    • Product Catalog with links to various product pages
    • Thank You page after the purchase is made that explains shipping, returns, customer service…etc.
  • Event
    • There will be an Event page for each product or service to be sold that includes detailed information about the product and an “Add to Cart” button.
  • Donation
    • Shopping Cart and the payment workflow to allow users to purchase the products they placed in the shopping cart.

Initial Configuration

We want to be able to track purchases and alert certain people when purchases happen, so we have to configure a few items up front.

Tracking Code

Create a tracking code for purchases made through your shop (e.g. products).

Contact Type

Create a new Contact Type (e.g. “Purchase”). This will be used as a custom way to alert someone to trigger product shipping or service provisioning.

Thank You Page

Create a Thank You page as a Basic Page. You can make this an empty page for now. We just need it ready when we want to point the Shopping Cart to it.

Shopping Cart

Create a new Donation page. Set the page for “one time payment” unless you want to allow people to make installment payments.

Note: Installment payments may be hard to track, but I haven’t tried. If you offer installment payments and you only ship once the final payment is made, you are effectively offering layaway. If you ship immediately and allow your customer to make installments, you are basically offering them credit. In both instances there is a lot to think about and track and I am not sure if NationBuilder in its current form would help you do either effectively.

Options

  • You should allow multiple donations, and any default amount is fine as it will be overridden when someone selects a product for sale.
  • Select the tracking code you created earlier.
  • You could set a goal if you want a publicly viewable target for your sales. I tag donors as “shopper”, but you can call it what you want. This allows me to target people that have made purchases before.
  • I also tag donors as needing follow up so I can ensure their product or service is processed and shipped or provisioned by someone.
  • Set the follow up type to the Contact Type you set up earlier. Set “Follow up by” to today to speed processing.
  • If you are actually selling a membership then you should select the membership and set the expiration date.
  • I also turn off all options on the page to limit distractions from purchase. No comments, streams…etc.
  • You should also edit your autoresponse template to change the wording to reflect the word “purchase” instead of “donation” or “contribution” (note: don’t change “donation” that appears between {{ }} brackets as those need to stay the same for the page to work).
  • You may want to update the page template to also reflect purchase instead of donation. While you are in the template you may want to add a continue shopping button that points back to your Product Catalog page.

The rest of the options are up to you. This is NationBuilder so there are many more options you can set, but this should get you started with an OK shopping cart page.

Product Page

Next, create an Event page for each product you want to sell.

Event Settings

Basic

  • If you have a set amount of inventory, you can set “Maximum Occupancy” to limit the number of purchases that can be made. You will have to adjust the template to change the message “Sorry, this event is sold out.” to something like “Sorry, this item is sold out.”
  • Check “Sell Tickets”.
  • Set “Redirect to this page to purchase tickets” to the Shopping Cart (Donation) page created earlier.
  • Set “Donation tracking code” to the tracking code created earlier.
  • Nothing else should be checked besides “Don’t list this event on the website” so your product doesn’t show up in your event calendar.

Intro

  • Add a detailed description of the product and maybe a picture.

Tickets

  • Add the name and pricing for your product. For example, if you were selling a T-shirt you could offer multiple prices (or multiple ticket levels) based on the size of the T-shirt. I would leave “Description” blank.

View Cart

In the product catalog and product page templates, you can add a link to your shopping cart page. Alternatively, this can be added to the website template or you can add the page to the navigation. There should be a way for your visitors to get back to the shopping cart page.

<div><a href="/shopping_cart">View Cart</a></div>

Of course you should add a class so you can add some flair. If you add the link to the individual product page templates, you may want to clone an existing product page when creating new ones so you don’t have to manually add the link each time.

Website Template

There is a message that displays when a product is added to the cart. It references tickets so I also replace that message. This is the message, “Thanks for coming! Now please complete your order, your tickets have been automatically added to the page below.”

I use a technique discussed in another post. Basically, in your _flash.html template you override the default message that mentions tickets with one that talks about items (more generic).

Replace

{{ flash.message }}

with

{{ flash.message | replace: 'Thanks for coming! ', '' | replace: 'Now please complete your order, your tickets have been automatically added to the page below.','Thank you. Your items have been added.' }}

I had to do the nasty double replace for some reason or it wouldn’t work.

Demo

I can’t give a public demo, because I’m lazy, but below are some screen shots of an example I worked through as I wrote this post. You will have to pretend that I took the time to properly layout and style the pages :).

Product Catalog

[Screenshot: Product Catalog page]

Imagine product pictures and descriptions with “view details” or “buy now” buttons instead of just text links.


Product Page

[Screenshot: Product page]

Imagination is needed here too.


Shopping Cart

[Screenshot: Shopping Cart page]

Imagine the items laid out in a table and other niceties. Again, charging tax and shipping is an issue, so you may want to rethink your usage of this. This could also use a subtotal (a little JavaScript magic may be in order). Once the user is ready to purchase, they would follow the normal workflow for donation payments.

Conclusion

This is not a robust system for product and service sales, but it works. There is a lot to optimize and improve; if sales are the primary focus of your site, you need another solution. You can’t use it to manage a large catalog of products, as maintaining the products and the associated pages would be a nightmare. Every time you need to add, edit, or remove a group of products you have to do it on each individual Product (Event) page. Yet, it gives you a way to offer a few products for sale on your Nation until NationBuilder’s product sales feature is ready.

Archer Application Template

The Archer Application Template is my opinionated implementation of a DDD’ish, CQRS’ish, Onion Architecture’ish project template. I use all the ish’es because it isn’t a pure representation of any of them, but borrows concepts from my experience with all of them. I was asked a few questions about it on Twitter and at a recent conference I attended, so I thought I would write a blog post to give more information on what it is and how it can be used.

What is Archer and Why the Name?

I guess I should explain the name. My wife’s nickname is Dutchess. She likes to watch this crazy cartoon named Archer, and the main character Archer’s code name is Dutchess. So, my wife says that her code name is Archer. Well, Archer kind of reminds me of architecture, so it was only natural to name it Archer…right? And it only made sense to code name the first release Dutchess.

The main reason I created it was so I could have a central location (GitHub) to save the template. I use the folder structure and some of the interfaces and base classes over and over again on projects I work on. I would normally copy and paste them into a new project. Having the template in a central location accessible by all the computers I work on was the driver for posting it on GitHub. Honestly, I didn’t think anyone would actually look at it or attempt to use it. You can find it here, but be warned that it is an early work in progress – https://github.com/charleslbryant/ArcherAppTemplate.

In the grand scheme, my vision is to provide Core and Infrastructure as a framework of sorts, as they provide a pattern that can be reused across projects. I want to compile Core and Infrastructure and reuse the binaries across multiple applications, maybe even host them on NuGet. This hasn’t been my usage pattern yet, as I am still trying to clean them up and stabilize them so I can use them without worrying too much about breaking changes. When will I reach this nirvana? I don’t know, because there is no development plan for this.

Right now, one of the primary benefits for me is a reusable folder structure and the interfaces and base classes that I can use to wire up an application’s general and cross-cutting concerns with the application infrastructure. I split the architecture into three layers: Core, Infrastructure, and Application. The Core project provides interfaces, base classes, and core implementations of various generic application concerns. Infrastructure is a project that provides implementations of various infrastructure-related concerns such as configuration, cache, logging, and generic data access repositories. Application is an empty project where the specific application concerns are implemented using the Core and Infrastructure.

OK enough fluff, let’s dig a little deeper into the dry documentation.

Core

Cache

Cache provides centralized access to the application cache store.

ICache

This interface provides the abstraction for application cache and provides for setting, getting, and removing keys from the cache.

CacheManager

This manager allows clients to access cache through the ICache interface by injecting the implementation of cache needed by the client.
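
To make the pattern concrete, here is a minimal sketch of the idea in C# (the shape, not the exact code from the template; the member names are my illustration):

public interface ICache
{
    //Assumed member names for illustration; the template's ICache may differ
    void Set(string key, object value);
    object Get(string key);
    void Remove(string key);
}

public class CacheManager
{
    private readonly ICache _cache;

    //The concrete ICache (in-memory, distributed, etc.) is injected by the client
    public CacheManager(ICache cache)
    {
        _cache = cache;
    }

    public void Set(string key, object value) { _cache.Set(key, value); }
    public object Get(string key) { return _cache.Get(key); }
    public void Remove(string key) { _cache.Remove(key); }
}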

Command

Command is the C in CQRS. Commands are basically actions that clients can take in a domain.

ICommand

This interface is basically a marker to identify a class as a command.

ICommandHandler

Command handlers are used to process commands, and this interface provides the contract for all command handlers. It exposes an Execute method that is used to kick off command processing. There was a specific question in regards to validation of commands. I validate requests that are passed as commands in the UI. I don’t trust this validation, so I also do validation in the command handler, which may include additional validation not included in the UI. Since commands return void, there is no way to return the result of validation through the command. So, when validation fails I throw a custom ValidationException that includes the reason validation failed. This can be caught higher in the application stack so that messaging can be returned to the user. This may change, as I am not yet 100% sure this is how I want to implement validation. The main takeaway is that there should be multiple points of validation, and there needs to be a way to alert users to validation errors, their cause, and possibly how to correct them.
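
A minimal sketch of the command side, including the validation approach described above. The RegisterUserCommand example is made up for illustration and isn’t part of the template:

using System;

public interface ICommand { } //marker interface

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Execute(TCommand command);
}

public class ValidationException : Exception
{
    public ValidationException(string reason) : base(reason) { }
}

//Hypothetical command and handler
public class RegisterUserCommand : ICommand
{
    public string Email { get; set; }
}

public class RegisterUserCommandHandler : ICommandHandler<RegisterUserCommand>
{
    public void Execute(RegisterUserCommand command)
    {
        //Re-validate here; the UI validation is not trusted
        if (string.IsNullOrWhiteSpace(command.Email))
            throw new ValidationException("Email is required.");

        //...do the actual work...
    }
}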

ICommandDispatcher

Command dispatchers are used to route commands to the proper command handler. This interface exposes a Dispatch method that is used to trigger the command routing and execution of the command handler’s Execute method.

CommandDispatcher

This provides a default implementation of the ICommandDispatcher interface. It uses Ninject to wire up commands to command handlers.
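
Roughly, the dispatcher resolves the handler bound to the command’s type and calls its Execute method. A sketch of the idea with Ninject (simplified, not the exact template code):

using Ninject;

public interface ICommandDispatcher
{
    void Dispatch<TCommand>(TCommand command) where TCommand : ICommand;
}

public class CommandDispatcher : ICommandDispatcher
{
    private readonly IKernel _kernel;

    public CommandDispatcher(IKernel kernel)
    {
        _kernel = kernel;
    }

    public void Dispatch<TCommand>(TCommand command) where TCommand : ICommand
    {
        //Ninject resolves whatever handler was bound to this command type
        var handler = _kernel.Get<ICommandHandler<TCommand>>();
        handler.Execute(command);
    }
}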

Configuration

IConfiguration

This interface provides the abstraction for application configuration and provides for setting, getting, and removing keys from the configuration. This is extremely similar to the Cache, but configuration provides long-lived persistence of key/value pairs, whereas cache is just temporary, short-lived storage of key/value pairs.

ConfigurationManager

Similar to the CacheManager, this manager allows clients to access configuration through the IConfiguration interface by injecting the implementation of configuration needed by the client.

Entity

This is an interesting namespace. Entity is the same as the DDD entity and is basically a way to define the properties of some domain concept. I don’t have the concept of an Aggregate, although I am thinking about adding it as I appreciate the concept in DDD.

IEntity

This interface just exposes an Id property that all entities should have. Currently it is a string, but I am thinking that it should probably be a custom type, because all Ids won’t be strings, ints, or Guids, and I would like to have type safety without forcing my opinion on what an Id should be.

EntityBase

This is a basic implementation of IEntity.

NamedEntity

This is a basic implementation of IEntity that is based on EntityBase and adds a string Name property. This was added because many of my entities included a name property and I got tired of duplicating Name.
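
In sketch form, the entity types look something like this (simplified from the descriptions above, not the exact template code):

public interface IEntity
{
    string Id { get; set; }
}

public class EntityBase : IEntity
{
    public string Id { get; set; }
}

public class NamedEntity : EntityBase
{
    public string Name { get; set; }
}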

Logger

This is one of those areas that needs work. I am sure there will be some breaking changes. I am not yet convinced on which implementation of logging I want to base my contracts on.

ILogger

The current interface for logging exposes one method, Log.

LogEntry

Right now this is a marker class, a placeholder. I envision it holding properties that are common to all log entries.

LogManager

This manager allows clients to access logging through the ILogger interface by injecting the implementation of logging needed by the client.

Message

Messaging in the Archer domain context concerns messaging users through applications like email, IM, Twitter…and more. This is another area that needs to stabilize. I have used so many different implementations of messaging and I haven’t settled on a design. Currently, Message is implemented with email in mind, but we may need to abstract it so that messages can be sent to various types of messaging application servers.

IEmail

This interface exposes one method, Send. The method accepts a Message to send and a MailServer to send it with.

Message

Message is not yet implemented, but it should contain the properties that are used in sending a message.

MailServer

MailServer is not yet implemented, but it should contain the properties that are used to route a Message to a MailServer.

MailManager

This manager allows clients to send mail with the IEmail interface by injecting the implementation of email needed by the client.
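
A sketch of where this namespace is headed, with Message and MailServer stubbed out since they aren’t implemented yet (illustrative, not the exact template code):

public class Message
{
    //Not yet implemented; would hold To, From, Subject, Body, etc.
}

public class MailServer
{
    //Not yet implemented; would hold host, port, credentials, etc.
}

public interface IEmail
{
    void Send(Message message, MailServer server);
}

public class MailManager
{
    private readonly IEmail _email;

    //The concrete IEmail implementation is injected by the client
    public MailManager(IEmail email)
    {
        _email = email;
    }

    public void Send(Message message, MailServer server)
    {
        _email.Send(message, server);
    }
}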

Query

Query is the Q in CQRS. Queries are basically requests for data that clients can ask of a domain. Queries look a lot like Commands in the structure of the files that make up the namespace, but the major difference is Commands return void and Queries return a result set.

IQuery

This interface is basically a marker to identify a class as a query.

IQueryHandler

Query handlers are used to process queries, and this interface provides the contract for all query handlers. It exposes a Retrieve method that is used to kick off query processing.

IQueryDispatcher

Query dispatchers are used to route queries to the proper query handler. This interface exposes a Dispatch method that is used to trigger the query routing and execution of the query handler’s Retrieve method.

QueryDispatcher

This provides a default implementation of the IQueryDispatcher interface. It uses Ninject to wire up queries to query handlers.
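
A minimal sketch of the query side, mirroring the command side but returning a result (again simplified, not the exact template code):

using Ninject;

public interface IQuery<TResult> { } //marker interface

public interface IQueryHandler<in TQuery, out TResult> where TQuery : IQuery<TResult>
{
    TResult Retrieve(TQuery query);
}

public interface IQueryDispatcher
{
    TResult Dispatch<TQuery, TResult>(TQuery query) where TQuery : IQuery<TResult>;
}

public class QueryDispatcher : IQueryDispatcher
{
    private readonly IKernel _kernel;

    public QueryDispatcher(IKernel kernel)
    {
        _kernel = kernel;
    }

    public TResult Dispatch<TQuery, TResult>(TQuery query) where TQuery : IQuery<TResult>
    {
        //Resolve the handler bound to this query/result pair and run it
        var handler = _kernel.Get<IQueryHandler<TQuery, TResult>>();
        return handler.Retrieve(query);
    }
}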

Repository

IReadRepository

This is a data access repository that addresses read-only concerns; it returns a result set for queries.

IWriteRepository

This is a data access repository that addresses write-only concerns. Like commands, it does not return a result set, but it does return a bool that signifies whether the write action succeeded. It violates CQRS in that I expose update and delete methods in the interface, but I wanted this to work for non-CQRS implementations, so I suppose this is more CQS than CQRS.
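
In sketch form, with the method names being my illustration rather than the exact template code:

using System.Collections.Generic;

public interface IReadRepository<TEntity> where TEntity : IEntity
{
    TEntity GetById(string id);
    IEnumerable<TEntity> GetAll();
}

public interface IWriteRepository<TEntity> where TEntity : IEntity
{
    //Returning bool for success leans CQS rather than strict CQRS
    bool Insert(TEntity entity);
    bool Update(TEntity entity);
    bool Delete(TEntity entity);
}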

Conclusion

I will cover infrastructure, application, and other topics in future posts.


Using Powershell to Export an SVN XML List to CSV

I needed to get a list of files in a specific folder of an SVN repository and export it as a CSV file. The main reason was to get the size of the contents of the folder, but I also wanted to work with the results (sort, group, filter) and Excel was the tool I wanted to do it in. I will use the svn command line to get the list of files and directories and Powershell to parse, transform, and output the CSV file.

PS C:\program files\tortoisesvn\bin> ([xml](svn list --xml --recursive https://myrepohost/svn/repo/branches/branch/folder)).lists.list.entry | select -property @(@{N='revision';E={$_.commit.GetAttribute('revision')}},@{N='author';E={$_.commit.author}},'size',@{N='date';E={$_.commit.date}},'name') | sort -property date | Export-Csv c:\svnlist.csv

OK, that is a mouthful, so here is a break down of what’s going on here.

[xml] – this is the Powershell XML type accelerator. It converts plain text XML into an XML document object that Powershell can work with. This can be used on any source that returns plain text XML, not just SVN list. More info, http://blogs.technet.com/b/heyscriptingguy/archive/2014/06/10/exploring-xml-document-by-using-the-xml-type-accelerator.aspx.

svn list --xml --recursive https://myrepohost/svn/repo/branches/branch/folder – this returns an XML list of files and folders from the SVN path, recursing into subdirectories (http://svnbook.red-bean.com/en/1.7/svn.ref.svn.html#svn.ref.svn.sw.verbose).

.lists.list.entry – this is some XML parsing magic where we get a reference to the root “lists” node, then each “list” and each “entry” in the list. More info, http://blogs.technet.com/b/heyscriptingguy/archive/2012/03/26/use-powershell-to-parse-an-xml-file-and-sort-the-data.aspx.

In the next part of the script we send each entry node object to our processing pipeline to produce the output. First we set the properties we want. If you want to see the XML, you could output it to a file like this:

PS C:\program files\tortoisesvn\bin> ([xml](svn list --xml --recursive https://myrepohost/svn/repo/branches/branch/folder)).Save("c:\svnlist.xml")

This simply takes the XML document created by [xml] and saves it to a file. If you view this file you will see that there is a root lists node that has a child list node, which has child entry nodes, which in turn have child nodes: name, size, and commit (with a revision attribute and child nodes for author and date).

<?xml version="1.0" encoding="UTF-8"?> 
<lists> 
<list path="https://myrepohost/svn/repo/branches/branch/folder"><entry kind="file"> 
<name>somefile.cs</name> 
<size>409</size> 
<commit revision="18534"> 
<author>Charles.Bryant</author> 
<date>2010-02-09T18:08:05.647589Z</date> 
</commit> 
</entry>
...

| select -property…. – this takes each of our entry nodes and parses it to select the output we want. For example, I want the author included in my output, so I tell Powershell to include author, N='author', and set the value to the value of the author node from the commit node object, E={$_.commit.author}. You will notice that to get the revision I am asking Powershell to GetAttribute on the commit node. As you can see, it’s pretty powerful and I could reformat my output as I see fit. More info, http://technet.microsoft.com/en-us/library/dd347697.aspx.

| sort -property date – this does what it says and sorts by date, http://technet.microsoft.com/en-us/library/dd347718.aspx.

| Export-Csv c:\svnlist.csv – formats the results as csv and saves it to a file, http://technet.microsoft.com/en-us/library/ee176825.aspx.

Conclusion

Powershell strikes again and provides a simple and easy way to work with XML output. I actually did another script that prints the size of the repository folder by getting a sum of the “size” nodes, but I will leave that as an exercise for the reader (hint: Measure-Object Cmdlet and the -Sum Parameter would be useful).

An Easy Win for Testable Methods

One of our developers was having an issue testing a service. Basically, he was having a hard time hitting the service as it is controlled by an external company and we don’t have firewall rules to allow us to easily reach it in our local environment. I suggested mocking the service since we really weren’t testing getting a response from the service, but what we do with the response. I was told that the code is not conducive to mocking. So, I took a look and they were right, but the fix to make it testable was a very simple refactor. Here is the gist of the code:

public SomeResponseObject GetResponse(SomeRequestObject request)
{
    //Set some additional properties on the request
    request.Id = "12345";

    //Get the response from the service
    SomeResponseObject response = Client.SendRequest(request);

    //Do something with the response
    if (response != null)
    {
        //Do some awesome stuff to the response
        response.LogId = "98765";
        //Id is a string, so use an ordinal comparison (string > string doesn't compile in C#)
        if (string.CompareOrdinal(response.Id, "999") > 0)
        {
            response.Special = true;
        }
        LogResponse(response);
    }

    return response;
}

What we want to test is the “Do something with the response” section of the code, but this method is doing so many things that we can’t isolate that section and test it…or can we? To make this testable we simply move everything that concerns “Do something with the response” to a separate method.

public SomeResponseObject GetResponse(SomeRequestObject request)
{
    //Set some additional properties on the request
    request.Id = "12345";

    //Get the response from the service
    SomeResponseObject response = Client.SendRequest(request);

    return ProcessResponse(response);
}

public SomeResponseObject ProcessResponse(SomeResponseObject response)
{
    //Do something with the response
    if (response != null)
    {
        //Do some awesome stuff to the response
        response.LogId = "98765";
        //Id is a string, so use an ordinal comparison (string > string doesn't compile in C#)
        if (string.CompareOrdinal(response.Id, "999") > 0)
        {
            response.Special = true;
        }
        LogResponse(response);
    }

    return response;
}

Now we can test the ProcessResponse method in isolation, away from the service calls. Since there were no changes to the service or the service client, we didn’t have to worry about testing them for this specific change. We don’t care what gets returned; we just want to know if the response was properly processed and logged. We still have a hard dependency on LogResponse’s connection to the database, but I will live with this as an integration test and fight for unit tests another day. This is a quick win for testability and a step closer to making this class SOLID.
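
For example, a test against ProcessResponse might look something like this (NUnit-flavored; ResponseService is a made-up name for the class under test, and as noted, the LogResponse dependency keeps this an integration test):

using NUnit.Framework;

[TestFixture]
public class ProcessResponseTests
{
    [Test]
    public void ProcessResponse_MarksHighIdResponsesAsSpecial()
    {
        var service = new ResponseService(); //hypothetical class under test
        var response = new SomeResponseObject { Id = "9999" };

        var result = service.ProcessResponse(response);

        Assert.AreEqual("98765", result.LogId);
        Assert.IsTrue(result.Special);
    }

    [Test]
    public void ProcessResponse_ReturnsNullForNullResponse()
    {
        var service = new ResponseService(); //hypothetical class under test

        Assert.IsNull(service.ProcessResponse(null));
    }
}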

Configure Remote Windows 2012 Server

Have you ever needed to inspect or configure a server and didn’t want to go through the hassle of remoting into it? Me too. As I take a deeper dive into the bowels of PowerShell 4, I found a cmdlet that lets me issue PowerShell commands on my local machine and have them run on a remote server. I know you’re excited; I couldn’t contain myself either. You will need PowerShell 4 and a Windows 2012 server that you have login rights to control. I am going to give you the commands to get you started and then you can Bing the rest, but it’s pretty simple. Once you’ve established the connection, you just issue PowerShell commands as if you were running them locally. Basically, you can configure your remote server from your local machine. You don’t even need to activate the GUI on the server; you can drive it all from PowerShell and save the resources the GUI would need.

Security

Is it secure? About as secure as remoting into the server through a GUI. Yet, there is a difference in the vulnerabilities you have to deal with. Security will always be an issue. This is something I will have to research more, but I do know that you can encrypt the traffic and keep the messages deep inside your DMZ.

Code

Note: Anything before the > is part of the command prompt.

PS C:\> Enter-PSSession -ComputerName server01
[server01]: PS C:\Users\CharlesBryant\Documents>

This starts the session. Notice that the command prompt now has the server name in braces and I am in my documents folder on the server.

[server01]: PS C:\Users\CharlesBryant\Documents> hostname
server01

Here I issue the host name command to make sure I’m not dreaming and I am actually on the server. Yup, this is really happening.

[server01]: PS C:\Users\CharlesBryant\Documents> Get-EventLog -list | Where-Object {$_.LogDisplayName -eq "Application"}
Max(K) Retain OverflowAction    Entries Log
------ ------ --------------    ------- ---
 4,096      0 OverwriteAsNeeded   3,092 Application

Yes…I just queried the event log on a remote server without having to go through the remote desktop dance. BooYah! To end your session is even easier.

[server01]: PS C:\Users\CharlesBryant\Documents> Exit

Enjoy.

Configure MSDTC with PowerShell 4.0

Continuing on the PowerShell theme from my last post, I wanted to save some knowledge on working with DTC in PowerShell. I am not going to list every command, just what I’ve used recently to configure DTC. You can find more information on MSDN, http://msdn.microsoft.com/en-us/library/windows/desktop/hh829474%28v=vs.85%29.aspx or TechNet, http://technet.microsoft.com/en-us/library/dn464259.aspx.

View DTC Instances

Get-DTC will print a list of DTC instances on the machine.

PS> Get-Dtc

Stop and Start DTC

Stop

PS> Stop-Dtc -DtcName Local

Stopping DTC will abort all active transactions. So, you will get asked to confirm this action unless you turn off confirmation.

PS> Stop-Dtc -DtcName Local -Confirm:$False

Start

PS> Start-Dtc -DtcName Local

Status

You could use a script to confirm that DTC is started or stopped. When you call Get-Dtc and pass it an instance name it will return a property named “Status”. This property will tell you if the DTC instance is Started or Stopped.

PS> Get-Dtc -DtcName Local

Network Settings

You can view and adjust DTC Network Settings.

View

To view the network settings:

PS> Get-DtcNetworkSetting -DtcName Local

-DtcName is the name of the DTC instance.

Set

To set the network settings:

PS> Set-DtcNetworkSetting -DtcName Local -AuthenticationLevel Mutual -InboundTransactionsEnabled $True -LUTransactionsEnabled $True -OutboundTransactionsEnabled $True -RemoteAdministrationAccessEnabled $False -RemoteClientAccessEnabled $True -XATransactionsEnabled $False

Here we set the name of the instance to set values for, then list the property/value pairs we want to set. $True and $False are PowerShell’s built-in variables for the boolean values true and false. If you try to run this set command, you will get a message asking if you want to stop DTC. I tried stopping DTC first and then running this command, and it still presented the confirmation message. You can add -Confirm:$False to turn off the confirmation message.

Conclusion

There is a lot more you can do, but this fits my automation needs. The only thing I couldn’t figure out is how to set the DTC Logon Account. There may be a magical way of finding the registry keys and setting them or something, but I couldn’t find anything on it. If you know, please share…I’ll give you a cookie.

http://www.sqlha.com/2013/03/12/how-to-properly-configure-dtc-for-clustered-instances-of-sql-server-with-windows-server-2008-r2/ – Has some nice info on DTC and DTC in a clustered SQL Server environment. He even has a PowerShell script to automate configuration…Kudos. Sadly, his script doesn’t set Logon Account.


Working with the Windows Registry with Powershell 4.0

I figured I would rehash some of the learning I did on working with the registry in PowerShell. Most of my research on this topic was on a couple of TechNet pages.

There is nothing really new here, just an attempt to commit what I learned on TechNet to my brain.

WARNING: Editing your registry is dangerous. Make sure you know what you’re doing, document your changes, and have a backup so you can revert when you mess up.

The first interesting tidbit I learned was that PowerShell looks at the registry like it is a drive and working with the registry is similar to working with files and folders. The big difference is all of the keys are treated like folders and the registry entries and values are properties on the key. So, there is no concept of a file when working with the registry.

Viewing Registry Keys

Just like working with the file system in PowerShell, we can use the powerful Get-ChildItem command.

PS> Get-ChildItem -Path hkcu:\

Interesting, right? hkcu is the HKEY_CURRENT_USER registry hive and it’s treated like a drive with all of its keys as folders under the drive. Actually, hkcu is a PowerShell drive.

PowerShell Drives

PowerShell creates a data store for PowerShell drives and this allows you to work with the registry like you do the file system.

If you want to view a list of the PowerShell drives in your session run the Get-PSDrive command.

PS> Get-PSDrive

Did you notice the other drives like Variable and Env? Can you think of a reason to use the Env drive to get access to Path or other variables?

Since we are working with a drive we can achieve the same results that we did with Get-ChildItem using basic command line syntax.

PS> cd hkcu:\
PS> dir

Path Aliases

We can also represent the path with the registry provider name followed by “::”. The registry provider name is Microsoft.Powershell.Core\Registry and can be shortened to Registry. The previous example can be written as:

PS> Get-ChildItem -Path Microsoft.Powershell.Core\Registry::HKEY_CURRENT_USER
PS> Get-ChildItem -Path Registry::HKCU

The shortened syntax is much easier to type, but spelling out the provider is more verbose and explicit about what is happening (fewer comments needed in the code to explain what’s going on).

More Get-ChildItem Goodies

The examples above only list the top level keys under the path. If you want to list all keys you can use the -Recurse parameter, but if you do this on a path with many keys you will be in for a long wait.

PS> Get-ChildItem -Path Registry::HKCU -Recurse

We can use the Set-Location command to set our current location in the registry. With a location set, we can use “.” in the path to refer to the current location and “..” for the parent key.

PS> Set-Location -Path Registry::HKCU\Environment
PS> Get-ChildItem -Path .
PS> Get-ChildItem -Path '..\Keyboard Layout'

Above, we set the location to the Environment key, then we get the items for the key using only “.” as the path, then we get the items in another key using the “..” to represent the parent key and indicating the key under the parent we want to get items for.

When using Get-ChildItem on the registry we have its parameters at our disposal, like Path, Filter, Include, and Exclude. Since these parameters only work against names, we have to use more powerful cmdlets to get more meaningful filtering done. In the example provided on TechNet, we are able to get all keys under HKCU:\Software with no more than one subkey and exactly four values:

PS> Get-ChildItem -Path HKCU:\Software -Recurse | Where-Object -FilterScript {
    ($_.SubKeyCount -le 1) -and ($_.ValueCount -eq 4)
}

Working with Registry Keys

As we saw, registry keys are PowerShell items, so we can use the other PowerShell item commands. Keep in mind that you can represent the paths in any of the ways we already covered.

Copy Keys

Copy a key to another location.

PS> Copy-Item -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion -Destination Registry::HKCU -Recurse

This copies the keys in the Path to the Destination. Since we added Recurse all of the keys, not just the top level, will be copied.

Creating Keys

Create a new key.

PS> New-Item -Path Registry::HKCU\_DeleteMe

Deleting Keys

Delete a key.

PS> Remove-Item -Path Registry::HKCU\_DeleteMe\* -Recurse

This will remove all items under _DeleteMe. \* is telling PowerShell to delete the items, but keep the container. If we didn’t use \* the container, _DeleteMe, would be removed too. -Recurse will remove all items in the container, not just the top level items. If we attempted to remove without adding the -Recurse parameter and the item has child items we would get a warning that we are about to remove the item and all of its children. -Recurse hides that message.

Working with Registry Entries

Working with registry keys is simple because we get to reuse what we know about working with the file system in PowerShell. One problem is that registry entries are represented as properties of registry key items. So, we have to do a little more work to deal with entries.

List Entries

The easiest way, IMHO, to view registry entries is with Get-ItemProperty.

PS>  Get-ItemProperty -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion

This will list all of the entries for the key, along with PowerShell-related properties prefixed with “PS”.

Get Single Entry

To get a single entry we use the same Get-ItemProperty and add a -Name parameter to specify the entry we want to return.

PS> Get-ItemProperty -Path HKLM:\Software\Microsoft\Windows\CurrentVersion -Name DevicePath

This will return just the DevicePath entry along with the related PS properties.

Create New Entry

We can add a new registry key entry with the New-ItemProperty command.

PS> New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -PropertyType String -Value $PSHome

There is a little more complexity to this operation, but it’s still not rocket science. We added two more parameters. PropertyType signifies the type of property to create and it must be a Microsoft.Win32.RegistryValueKind (how to deal with 64-bit is something I haven’t dealt with, so I leave it to you for now). Value in the example uses the PowerShell variable $PSHome, which is the install directory of PowerShell. You can use your own values or variables for -Value.

PropertyType Value   Meaning
------------------   -------
Binary               Binary data
DWord                A number that is a valid UInt32
ExpandString         A string that can contain environment variables that are dynamically expanded
MultiString          A multiline string
String               Any string value
QWord                8 bytes of binary data

Rename Entry

To rename an entry just specify the current name and the new name.

PS> Rename-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -NewName PSHome -passthru

-NewName is the new name for the entry. The -PassThru parameter is optional and is used to display the renamed value.

Delete Entry

To delete an entry just specify the name.

PS> Remove-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PSHome

Conclusion

Is it really this easy? Why yes it is young Padawan…yes it is. You too can control the force to destroy your registry with PowerShell :).


Scripting Builds with C#, Yes I Said Scripting

If you haven’t partaken of the delicious goodness that is Roslyn, don’t fret, it’s easy to get in on the fun. Have you heard of ScriptCS? It’s kind of what I hoped Powershell would become: scripting with C#. No stinking compiling and complex builds, no having to learn new complex syntax and functionality, just code in C# and go. This is what ScriptCS brings by way of Roslyn. I had fun just writing C# and running it, but I needed to find a practical reason to script C# in the real world.

Then it hit me. I was in the middle of writing a build script for a project and I wondered how I could do it with ScriptCS. Looking at my NAnt script, I started looking for a way to port it to ScriptCS and failed to envision an easy way to do it. So, I ended up doing some searching and stumbled upon Nake (actually, I think a co-worker may have told me about it, I can’t remember). As the author, Yevhen Bobrov, describes Nake, “Write your build automation scripts in C# without paying the angle bracket tax!” According to Yevhen, he uses the ScriptCS pre-processing engine and takes advantage of Roslyn’s syntax re-writing features to rewrite task invocations.

Enough talk, let’s code. We will start with a build in NAnt using the sample from nant.org.

<?xml version="1.0"?>
<project name="Hello World" default="build" basedir=".">
<description>The Hello World of build files.</description>
<property name="debug" value="true" overwrite="false" />
<target name="clean" description="remove all generated files">
<delete file="HelloWorld.exe" failonerror="false" />
<delete file="HelloWorld.pdb" failonerror="false" />
</target>
<target name="build" description="compiles the source code">
<csc target="exe" output="HelloWorld.exe" debug="${debug}">
<sources>
<includes name="HelloWorld.cs" />
</sources>
</csc>
</target>
</project>

We can do something similar in Nake like so:

using System;
using Nake;

//The Hello World of build files.
public static string basedir = ".";
public static string configuration = "Debug";
public static string platform = "AnyCPU";

[Task] public static void Default()
{
    Build();
}

//remove all generated files
[Task] public static void Clean()
{
    File.Delete("HelloWorld.exe");
    File.Delete("HelloWorld.pdb");
}

//compiles the source code
[Task] public static void Build()
{
    Clean();
    MSBuild
        .Projects("HelloWorld.csproj")
        .Property("Configuration", configuration)
        .Property("Platform", platform)
        .Targets(new[] {"Rebuild"})
        .BuildInParallel();
}

I really like Nake; it feels like regular C# coding to me. There may be more lines, but they’re easily readable lines IMHO. Not to mention, I have access to the full power of C# and not just the features added to a scripting tool like NAnt.

After working with Nake for a little while, I found another project that targets task scripting with C#: Bau. Bau’s author, Adam Ralph, tags the project as “The C# task runner”, built as a scriptcs script pack and inspired by Rake, Grunt, and gulp. He uses an interesting approach where tasks are chained together like a long fluent build train. I haven’t had the chance to actually use Bau, but I read through some of the source and documentation. Having to chain tasks together in Bau seems foreign and limiting, as I am not sure how to achieve a reusable, composable design in the manner I am accustomed to. It’s probably very simple, just not readily apparent to me like it is in Nake.

Well, I’ll keep it short. It’s good to see that there are options beginning to emerge in this space. I hope the community contributes to them. Both Nake and Bau open up scripting builds to C# developers. We get to leverage what we know about C# and script with syntax and tools we are already familiar with. We get the task based nature of NAnt with the familiarity of C#. So, if you aren’t ready to take the plunge in Roslyn, how about testing the waters with C# build scripting.

Footnote: Nake hasn’t had any commits in 5 months. Yevhen lists his location as Kiev, Ukraine. Yevhen, I have been watching the news about all of the violence happening in Ukraine, and if you are there I hope that you are OK. Thanks for Nake.

Visual Studio Conditional Build Events

I wanted to run a Post Build Event on Release builds only. I had never done a conditional event, but it turns out it isn’t that difficult. From what I have found so far, there are two ways to accomplish this.

If you define your Post Build Event in the Project Configuration’s Build Events screen in Visual Studio, you can add a conditional if statement to define the condition you want the event to run on.

if $(ConfigurationName) == Release (
copy $(TargetPath) $(SolutionDir)\Plugins\$(TargetFileName)
)

In this example I compare the $(ConfigurationName) property to the text “Release”. You could replace this with the name of the build configuration you want to run your post-build script on. A note on build events: they are translated to batch files and then run, so you could do anything in your event that you could do in a batch file (this is a big assumption, as I haven’t run every command in a build event yet, but I strongly suspect it’s safe to assume most cases will be OK).

If you define your build event directly in the project file, you could add a condition to the PropertyGroup:

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
    <PostBuildEvent>copy $(TargetPath) $(SolutionDir)\Plugins\$(TargetFileName)</PostBuildEvent>
</PropertyGroup>

If you haven’t used Build Events you should check them out as you can bend your build to your will. You can preprocess files before your build, move files after the build, clean directories…basically anything you can do with a batch file you can do in a Build Event, because it is a batch file.

Reference

Build Events – http://msdn.microsoft.com/en-us/library/ke5z92ks.aspx

Batch Files – http://technet.microsoft.com/en-us/library/bb490869.aspx

Build Event Macros – http://msdn.microsoft.com/en-us/library/42x5kfw4.aspx