Category: DevOps

Install IIS with PowerShell

Here is another PowerShell command. This one is for installing IIS.

First I establish a session with the server I want to install to:

PS> enter-pssession -computername winbuildserver1

Next we just need to run a simple command:

winbuildserver1: PS> Install-WindowsFeature Web-Server -IncludeManagementTools -IncludeAllSubFeature -Source E:\sources\sxs

In this example I am installing the IIS web server, including the management tools and all sub-features, from a specific source path. Easy-peasy.
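To sanity-check what actually got installed, you can query the feature list afterwards (still inside the remote session); the wildcard below is just an illustration:

winbuildserver1: PS> Get-WindowsFeature Web-*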

Of course you can achieve more fine-grained control over the install, and you can get more information on that at:

http://technet.microsoft.com/en-us/library/jj205467.aspx

Manage Windows Services with PowerShell

This is just a quick post to document some PowerShell commands so I don’t forget where they are. One of them wasn’t as easy to find as I thought it should be (Mr. Delete Service). If you want to delete a Windows Service, how do you do it with PowerShell? You can use WMI, but PowerShell also includes some more friendly methods for working with services that aren’t that hard to find.

Delete Service

PS> (Get-WmiObject win32_service -filter "name='Go Agent 2'").Delete()

Here I am deleting one of my Go.cd Agent Services. The only item I change from service to service is the “name=” filter; everything else has been boilerplate so far, but there are other parameters you can set. One thing I noticed: if the service is started you have to stop it first for the delete to complete, otherwise it is just marked for deletion.
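Since I keep forgetting the stop-first detail, here is a minimal sketch that stops the service before deleting it; the service name is the same one from above:

PS> $svc = Get-WmiObject win32_service -Filter "name='Go Agent 2'"
PS> if ($svc.State -eq 'Running') { Stop-Service -Name $svc.Name }
PS> $svc.Delete()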

You can get more info on PowerShell WMI here:
http://msdn.microsoft.com/en-us/library/dd315295.aspx
http://msdn.microsoft.com/en-us/library/aa384832(v=vs.85).aspx

New Service

PS> New-Service -Name "Go Agent 2" -Description "Go Agent 2" -BinaryPathName "`"D:\Go Agents\2\cruisewrapper.exe`" -s `"D:\Go Agents\2\config\wrapper-agent.conf`""

Here I am creating the Go Agent. Notice that I am able to set additional command parameters in the BinaryPathName, like the -s to set my config file above. I use the backtick (`) to escape quotes.
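If the escaping gets hard to read, an equivalent approach is to build the paths in variables first; this is just a sketch of the same command as above:

PS> $exe  = 'D:\Go Agents\2\cruisewrapper.exe'
PS> $conf = 'D:\Go Agents\2\config\wrapper-agent.conf'
PS> New-Service -Name "Go Agent 2" -Description "Go Agent 2" -BinaryPathName "`"$exe`" -s `"$conf`""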

Start Service

PS> start-service -name "Go Agent 2"

This is a simple command that just needs the service name. You only need the double quotes if your name has spaces.

Stop Service

PS> stop-service -name "Go Agent 2"

This is another simple one just like start.

Conclusion

Don’t remote into your server anymore to manage your services. Run remote PowerShell commands.
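For a one-off command you don’t even need an interactive session; Invoke-Command will run the same service cmdlets remotely (the server name here is just an example):

PS> Invoke-Command -ComputerName winbuildserver1 -ScriptBlock { Restart-Service -Name "Go Agent 2" }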

 Update

They say “Reading is Fundamental”, and the delete service answer I was looking for was at the bottom of the page where I learned about creating services, http://technet.microsoft.com/en-us/library/hh849830.aspx. It even lists another way to delete services:

PS> sc.exe delete "Go Agent 2"

GoCD: Integrating Bug Tracking

Go.cd allows you to integrate with your tracking tools. You can define how to handle ticket numbers in your source control commit messages. Go.cd parses the message looking for a specific pattern and transforms matches into links to your tracking tool.

We use AxSoft OnTime, and integration wasn’t as straightforward as I envisioned. OnTime uses two query strings to identify the ticket number and the type of ticket (defect, feature, incident…).

View Defect: https://name.ontimenow.com/viewitem.aspx?id=102578&type=defects

Create Defect: https://name.ontimenow.com/edititem.aspx?type=defects

From what I can tell, Go.cd only allows the use of one parameter and has no facility to expand the regex and parameters to work with the various patterns for the type of ticket. Example: we may have a defect ticket OTD 102578 and a feature OTF 87984. When commits are related to either one of these tickets, the ticket number, including the prefix, is added to the commit message. To turn this into a link in Go.cd, we have to parse the OT* prefix and map it to defect or feature, depending on the value of *, and add that to the type query string parameter. Then we have to grab the number after the ticket type prefix and add it to the id query string parameter.
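Just to make the transformation concrete, here is what the mapping would look like in PowerShell, outside of Go.cd. This is only an illustration of the logic Go.cd would need, not something it supports, and the 'features' type value is a guess:

$message = 'Fixed the null reference, OTD 102578'
if ($message -match 'OT(?<kind>[DF])\s+(?<id>\d+)') {
    $type = if ($Matches['kind'] -eq 'D') { 'defects' } else { 'features' }   # 'features' is a guessed query-string value
    "https://name.ontimenow.com/viewitem.aspx?id=$($Matches['id'])&type=$type"
}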

I am not really sure how to get this to work, aside from messing around with the source code. Did I say source code? Solved! Well maybe…tune in.

GoCD: Pipeline Parameters

I am currently getting a Go Continuous Delivery Server stood up, and I am reusing scripts from our CCNET Continuous Integration server to save time getting the new server up. In doing this, I am also reviewing the scripts for improvements. When defining builds for my shiny new Go server, I noticed that I have duplicate properties across tasks when calling NAnt targets.

I have a property that defines the server environment the task should work with. This basically sets properties in the script so that they target the correct environment. If I set the property to dev, the script will set server names, paths, and more to point to the dev environment.

There is a property that tells the task what source code repository branch to use. In the context of a task, this mainly has to do with paths to the branch already checked out and updated on the Go server and not controlling the actual branch updates. The branch name is concatenated in a common path so the task knows where to get and save source files.

There are more duplicated properties, but it got me wondering if there is a better way to do this than repeating the same value over and over for each task. When I need a new task or a new pipeline with different values, this can become a maintenance nightmare. Not to mention, this duplication will make it hard to take advantage of Go Pipeline Templates, so I need to solve it if I want to gain the ease in creating new pipelines that templates afford.

Go has the concept of an Environment Variable, which is a common value saved on an agent. That is a nice feature because it lets me define values common to every pipeline running on an agent that targets a specific environment. To fix my issue I need to be able to set values at the job level. The agent level is too broad, because I can have multiple pipelines target specific agents with different values for the properties I want to abstract. I wonder if we can set up a common variable at the job or pipeline level that the tasks can use?

Go Pipeline Parameters

Oh look, there is something called a Parameter in the pipeline configuration.

Parameters help reduce repetition within your configurations and combined with templates allow you to setup complex configurations. More..

Well, let’s look into that and see what we come up with. The “More” link above gives the details of how to use parameters in Go. Basically, you define a parameter and then use it in your tasks by wrapping the name with #{}. Example: I have a parameter named ENV and I would use it in my script by tokenizing it like so: #{ENV}.

[Screenshot: using a Go pipeline parameter in a task]
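To make it concrete, a task definition using the parameter might look something like this; the NAnt target and the BRANCH parameter name are made up for the example (ENV is the one from above):

Command:   nant.exe
Arguments: -buildfile:build.xml -D:environment=#{ENV} -D:branch=#{BRANCH} deploy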

This gets rid of a bunch of unnecessary duplication, improves the maintainability of build scripts, and opens up the possibility of creating templates.

Conclusion

There is so much more to understand about Go. One thing that was reinforced in this exercise was to remember to constantly look for pain points in solutions and processes and search for ways to cure the pain.

Scripting New Server Builds

What I am going to talk about is probably common knowledge in IT land, and maybe even common knowledge to a lot of developers, but it just recently occurred to me as a good idea. Scripting new server builds hit me like a ton of bricks when I saw it being done by one of our developers; we will call him Foo since I didn’t ask permission to use his name.

Revelation

Foo was recording the scripts he used to build the server in a text document, along with the additional steps he took to get the server up. He also stored this text document in source control so that the instructions are versioned. Did I say genius…very smart guy. Maybe I’m just a developer caveman, and watching him burning that tree made me curious and made me want to share it with my family. So, I grab a burning branch, run back to the cave and look…I invented fire! OK, maybe not that deep, but it was a revelation that had never really connected with me, even though I have been exposed to the idea before.

Framework

In my quest to become a better Automation Engineer, I learned about server virtualization, how to optimize server images, strategies for provisioning and teardown of virtual instances, and more. I even learned about scripting and automating server configuration, but I didn’t dive into the depths of any of the subjects. As I looked through the text files that Foo had in source control, a lightbulb went off and everything connected. So, I grab the text files, blog about them, and wait for it…….I invented scripting new server builds! At least I invented it in the small world in my mind.

The idea I am exploring now is to use the same logic that Foo used and expand upon it so that it can be used in provisioning virtual instances. I could cozy up to Chef, Puppet, or System Center, stop writing this post, and do some research to figure out best practices and various strategies for doing this, but where is the fun in that? So, let’s blog it out, get the basics, then find a better way as I feel the overwhelming weight of what I’ve gotten myself into. Even if I do end up using a boxed tool, knowing how to do this manually and hand rolling my own basic automation will make me that much more dangerous when I get the power of a tool in my hand. So, I enter the 36th Chamber.

Requirements

First thing I want to do is reorganize how this is done, so let’s set out some requirements.

  • I want scripts that are runnable. Right now I have to copy and paste them into a script file to get them to run, so the scripts should be in script files that can be easily run by a script engine.
  • I want the scripts to be reusable, so I won’t create one file that contains every step, but many files that can be called by other scripts and customized by passing in arguments. This gives me a way to compose a server build without having to duplicate major functionality, which is a lot easier to maintain and optimize.
  • I want the scripts to be generic enough to use across multiple types of server instances, but custom enough that we don’t create a massive mess that is just a dumb abstraction on top of the scripting engine and its plugins. It is important that the scripts do more than just call another external function in the script engine or a plugin; there should be some additional logic that makes it worthwhile to script.

Additional goals:

  • I want to log tasks and exceptions while running the scripts, especially timing and contextual data, so that I can monitor and analyze script execution.
  • I want to notify engineers when there is a problem or manual steps that have to be done. When we hit a step that is in distress or needs manual intervention, I want the script to send an email, IM, tweet…or something to the engineer or group managing the provisioning of the server.
  • I want this to be scalable and performant, but the initial iterations should focus on getting it to work for provisioning just one instance. Scaling may be better solved with a third-party tool, and I will face that issue when I hit the scaling problem, or at least at a point where I can project and understand the impending scaling issue.

Workflow

I guess that is enough to get me going. So, I take on a couple of the steps to stand the server up: install and start services.

  • I run the manual steps to install a service
  • I script the manual install steps
  • I run the install script on the server
  • I run the manual steps to uninstall the service
  • I script the manual uninstall steps
  • I run the install script on the server again (so there is something to uninstall)
  • I run the uninstall script on the server

As I take each step, I research how to best script it and address issues as they come up, because it rarely goes as planned. This is a poor man’s script, debug, and test methodology. I am sure there must be some fancy IDE that can help with this. I am configuring the servers remotely with PowerShell from my local environment. I’m a DotNetter, but I can see myself doing this with any scripting engine, on any platform, with any supporting tools that make it easier.

Iterate & Elaborate

I repeat the workflow to script out service start and stop. After I am satisfied, I save the scripts in a file named config_services.ps1 and change them so they can accept arguments. Now I have a script whose focus is managing services. Then I check it into source control.
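To give a feel for the shape of it, here is a stripped-down sketch of what config_services.ps1 might look like; the function names and conventions are mine, not a prescription:

# config_services.ps1 -- illustrative sketch, not the real script
function Install-AppService {
    param(
        [Parameter(Mandatory)][string] $Name,
        [Parameter(Mandatory)][string] $BinaryPathName
    )
    # Only create the service if it isn't already registered
    if (-not (Get-Service -Name $Name -ErrorAction SilentlyContinue)) {
        New-Service -Name $Name -BinaryPathName $BinaryPathName
    }
}

function Start-AppService {
    param([Parameter(Mandatory)][string] $Name)
    Start-Service -Name $Name
}

function Stop-AppService {
    param([Parameter(Mandatory)][string] $Name)
    Stop-Service -Name $Name
}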

Next, I create another script whose job is to orchestrate the workflow to configure the server using scripts like config_services.ps1. I hard code the arguments in the call to the install service function, but you know I’m thinking about how to get away from the hard coding; I just don’t want to go deeper down the rabbit hole than I have to. Speaking of the rabbit hole, how do I unit test a PowerShell script? I save this file as configure_server.ps1 and commit it.
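And configure_server.ps1, with the arguments still hard coded for now, could be as simple as this; again, the names and paths are just for illustration:

# configure_server.ps1 -- illustrative orchestration sketch
# Dot-source the service script so its functions are available here
. .\config_services.ps1

Install-AppService -Name 'Go Agent 2' `
    -BinaryPathName '"D:\Go Agents\2\cruisewrapper.exe" -s "D:\Go Agents\2\config\wrapper-agent.conf"'
Start-AppService -Name 'Go Agent 2'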

That was fun, but we need to do a lot more to configure a server. So, I take another task, configuring DTC, and I follow my development workflow to script out the manual steps. This involved a little registry manipulation, so I created a script to manage the registry too. Then I added calls to these scripts in configure_server.ps1, inside the same function that calls the install and start services functions. Now I have three of the steps to configure this server instance scripted with somewhat generic, encapsulated functions. This satisfies the major goals in my requirements.

Although I have ideas for refactoring this, I stop at this branch and switch gears to stub out a script that can log messages and send alerts. Then I add calls to it in all of the scripts so some messaging is instrumented and ready when I am done with the feature. I’m feeling good about myself, and I continue working in this manner until I have a solution to automate the configuration of a server instance.
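The logging and alerting stub is nothing fancy either; a rough sketch, with the SMTP details obviously being placeholders you would replace:

# notify.ps1 -- illustrative stub for logging and alerts
function Write-ProvisionLog {
    param([string] $Message, [string] $Level = 'INFO')
    # Timestamped line appended to a local log file
    "{0} [{1}] {2}" -f (Get-Date -Format o), $Level, $Message |
        Add-Content -Path .\provision.log
}

function Send-ProvisionAlert {
    param([string] $Subject, [string] $Body)
    # Placeholder SMTP server and addresses
    Send-MailMessage -SmtpServer 'smtp.example.com' -From 'provisioning@example.com' `
        -To 'engineers@example.com' -Subject $Subject -Body $Body
}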

Conclusion

That’s it for now. I know you are like, wait…where are the damn scripts? I might share them on GitHub if someone needs them, but I didn’t feel like digging them up and cleaning them up just to add them to this post.

If you are just getting into scripting servers like me, I hope this helps spark a flame for you as you think it through. If you are a Monk of the 36th Chamber and you see all kinds of issues and naive assumptions that I shouldn’t be publicizing to unknowing newbies, please let me know. If you are looking to become better at automating the software delivery pipeline, drop me a line, I am always looking for someone to spar and train with.

 

Finding Bad Build Culprits…Who Broke the Build!

I found an interesting Google Talk on finding culprits automatically in failing builds – https://www.youtube.com/watch?v=SZLuBYlq3OM. This is actually a lightning talk from GTAC 2013 given by grad students Celal Ziftci and Vivek Ramavajjala. First they gave an overview of how culprit analysis is done on build failures triggered by small and medium-sized tests.

CL, or change list, is a term I first heard in “How Google Tests Software” and refers to a logical grouping of changes committed to the source tree. This would be like a git feature branch.

Build and Small Tests Failures

When the build fails because of a build issue, we build the CLs separately until a CL fails the build. When the failure is in a small test (unit test), we do the same thing: build the CLs separately and run the tests against them to find the culprit. In both cases, we can do the analysis in parallel to speed it up. This is what I covered in my post on Bisecting Our Code Quality Pipeline, where git bisect is used to recurse the CLs.

Medium Tests

Ziftci and Ramavajjala define these tests as taking less than 8 minutes to run and suggest using a binary search to find the culprit. Target the middle CL and build it; if it fails, the culprit is that CL or one to its left, so we recurse to the left until we find the culprit. If it passes, the culprit is to the right, so we recurse to the right.

CL 1 – CL 2 – CL 3 – CL 4 – CL 5 – CL6

CL 1 is the last known passing CL. CL 6 was the last CL in the failing build. We start by analyzing CL 4, and if it fails, we move left and check CL 3. If CL 3 passes, we mark CL 4 as the culprit. If CL 3 fails, we check CL 2: if CL 2 passes, CL 3 is the culprit, and if CL 2 fails, CL 2 is the culprit, because we know CL 1 was good and don’t need to continue analyzing.

If CL 4 passed, we would move right and test CL 5 and if it fails, mark CL 5 as the culprit. If it passes, then we mark CL 6 as the culprit because it is the last suspect and we don’t have to waste resources analyzing it.
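A rough sketch of that search in PowerShell, where Test-Cl is a stand-in for “build this CL and run the medium tests against it, returning $true on pass”:

function Find-Culprit {
    param([string[]] $Cls)   # ordered oldest-to-newest; $Cls[0] is the last known good CL
    $low  = 1
    $high = $Cls.Count - 1
    while ($low -lt $high) {
        $mid = [int][math]::Floor(($low + $high) / 2)
        if (Test-Cl $Cls[$mid]) {
            $low = $mid + 1      # passed: the culprit is to the right
        } else {
            $high = $mid         # failed: the culprit is here or to the left
        }
    }
    return $Cls[$low]            # first failing CL
}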

Large Tests

They defined these tests as taking longer than 8 minutes to run. This was the primary focus of Ramavajjala and Ziftci’s research. They are focusing on developing heuristics that will let a tool identify culprits by pattern matching. For example, they have a heuristic that analyzes a CL for the number of files changed and gives a higher ranking to CLs with more files changed.

They also have a heuristic that calculates the distance of code in the CL from base libraries, like the distance from the core Python library for example. The closer it is to the core the more likely that it is a core piece of code that has had more rigorous evaluation because there may be many projects depending on it.

They seem to be investing a lot of time into ensuring that they can do this fast; they stress caching and optimizing how they do it. It sounds interesting, and once they have had a chance to run their tool and heuristics against the massive number of tests at Google (they both became employees of Google), hopefully they can share the heuristics that prove most adept at finding culprits at Google, and maybe anywhere.

Thoughts

They did mention possibly using a heuristic that looks at the logs generated by build failures to identify keywords that may provide more detail on who the culprit may be. I had a similar thought after I wrote the git bisect post.

Many times when a larger test fails, there are clues left behind that we would normally inspect manually to find the culprit. If the test has good messaging on its assertions, that is the first place to look. In a large end-to-end test there may be many places for the test to fail, but if the failure message gives a clue about what failed, it helps to find the culprit. Although, they spoke of 2-hour tests, and I have never seen one test that takes 2 hours, so what I was thinking about and what they are dealing with may be different animals.

There is also the test itself. If the test covers a feature and I know that only one file in one CL is included in the dependencies involved in the feature test, then I have a candidate. There are also application logs and system logs. The goal, as I saw it, is to find a trail that leads me back to a class, method, or file that matches a CL.

The problem with me trying to seriously solve this is that I don’t have a PhD in Computer Science; actually, I don’t have a degree except from the school of hard knocks. When they talked about the binary search for medium-sized tests it sounded great. I kind of know what a binary search is. I have read about it and remember writing a simple one years ago, but if you ask me to articulate the benefits of using a quad tree instead of a binary search, or to write a particular one on the spot, I will fumble. So, trying to find an automated way to analyze logs in a thorough, fast, and resource-friendly manner is a lot for my brain to handle. Yet, I haven’t let my shortcomings stop me yet, so I will continue to ponder the question.

We are talking about parsing and matching strings, not rocket science. This may be a chance for me to use or learn a new language more adept at working with strings than C#.

Conclusion

At any rate I find this stuff fascinating and useful in my new role. Hopefully, I can find more on this subject.

 

Configure Remote Windows 2012 Server

Have you ever needed to inspect or configure a server and didn’t want to go through the hassle of remoting into it? Me too. Well, as I take a deeper dive into the bowels of PowerShell 4, I found a cmdlet that allows me to issue PowerShell commands on my local machine and have them run on the remote server. I know you’re excited; I couldn’t contain myself either. You will need PowerShell 4 and a Windows 2012 server that you have login rights to control. I am going to give you the commands to get you started and then you can Bing the rest, but it’s pretty simple. Once you’ve established the connection, you just issue PowerShell commands as if you were running them locally. Basically, you can configure your remote server from your local machine. You don’t even need to activate the GUI on the server; you can drive it all from PowerShell and save the resources the GUI needs.

Security

Is it secure? About as secure as remoting into the server through a GUI, though there is a difference in the vulnerabilities you have to deal with. Security will always be an issue. This is something I will have to research more, but I do know that you can encrypt the traffic and keep the messages deep inside your DMZ.

Code

Note: Anything before the > is part of the command prompt.

PS C:\> Enter-PSSession -ComputerName server01
[server01]: PS C:\Users\CharlesBryant\Documents>

This starts the session. Notice that the command prompt now has the server name in brackets and I am in my documents folder on the server.

[server01]: PS C:\Users\CharlesBryant\Documents> hostname
server01

Here I issue the host name command to make sure I’m not dreaming and I am actually on the server. Yup, this is really happening.

[server01]: PS C:\Users\CharlesBryant\Documents> Get-EventLog -List | Where-Object {$_.LogDisplayName -eq "Application"}
Max(K) Retain OverflowAction Entries Log
------ ------ -------------- ------- ---
4,096 0 OverwriteAsNeeded 3,092 Application

Yes…I just queried the event log on a remote server without having to go through the remote desktop dance. BooYah! Ending your session is even easier.

[server01]: PS C:\Users\CharlesBryant\Documents> Exit

Enjoy.

Configure MSDTC with PowerShell 4.0

Continuing on the PowerShell theme from my last post, I wanted to save some knowledge on working with DTC in PowerShell. I am not going to list every command, just what I’ve used recently to configure DTC. You can find more information on MSDN, http://msdn.microsoft.com/en-us/library/windows/desktop/hh829474%28v=vs.85%29.aspx or TechNet, http://technet.microsoft.com/en-us/library/dn464259.aspx.

View DTC Instances

Get-Dtc will print a list of DTC instances on the machine.

PS> Get-Dtc

Stop and Start DTC

Stop

PS> Stop-Dtc -DtcName Local

Stopping DTC will abort all active transactions. So, you will get asked to confirm this action unless you turn off confirmation.

PS> Stop-Dtc -DtcName Local -Confirm:$False

Start

PS> Start-Dtc -DtcName Local

Status

You can use a script to confirm that DTC is started or stopped. When you call Get-Dtc and pass it an instance name, it returns a property named “Status” that tells you whether the DTC instance is Started or Stopped.

PS> Get-Dtc -DtcName Local
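For example, a quick check in a script might look like this (using the Local instance from above):

PS> if ((Get-Dtc -DtcName Local).Status -eq 'Started') { 'DTC is running' } else { 'DTC is stopped' }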

Network Settings

You can view and adjust DTC Network Settings.

View

To view the network settings:

PS> Get-DtcNetworkSetting -DtcName Local

-DtcName is the name of the DTC instance.

Set

To set the network settings:

PS> Set-DtcNetworkSetting -DtcName Local -AuthenticationLevel Mutual -InboundTransactionsEnabled $True -LUTransactionsEnabled $True -OutboundTransactionsEnabled $True -RemoteAdministrationAccessEnabled $False -RemoteClientAccessEnabled $True -XATransactionsEnabled $False

Here we name the DTC instance we want to change, then list the property/value pairs we want to set. $True and $False are PowerShell’s built-in boolean values for true and false respectively. If you try to run this set command, you will get a message asking if you want to stop DTC. I tried stopping DTC first and then running the command, and it still presented the confirmation message. You can add -Confirm:$False to turn off the confirmation message.
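So in an automated script the full command would simply end with the confirmation switched off:

PS> Set-DtcNetworkSetting -DtcName Local -AuthenticationLevel Mutual -InboundTransactionsEnabled $True -LUTransactionsEnabled $True -OutboundTransactionsEnabled $True -RemoteAdministrationAccessEnabled $False -RemoteClientAccessEnabled $True -XATransactionsEnabled $False -Confirm:$False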

Conclusion

There is a lot more you can do, but this fits my automation needs. The only thing I couldn’t figure out is how to set the DTC Logon Account. There may be a magical way of finding the registry keys and setting them or something, but I couldn’t find anything on it. If you know, please share…I’ll give you a cookie.

http://www.sqlha.com/2013/03/12/how-to-properly-configure-dtc-for-clustered-instances-of-sql-server-with-windows-server-2008-r2/ – Has some nice info on DTC and DTC in a clustered SQL Server environment. He even has a PowerShell script to automate configuration…Kudos. Sadly, his script doesn’t set Logon Account.

 

Bisecting Our Code Quality Pipeline

I want to implement gated check-ins, but it will be some time before I can restructure our process and tooling to accomplish it. What I really want is to keep the source tree green and, when it is red, provide feedback to quickly get it green again. I want to run tests on every commit and give developers feedback on their failing commits before they pollute the source tree. Unfortunately, running the tests as we have them today would take too long on every commit. I came across a quick blog post by Ayende Rahien on Bisecting RavenDB, and they had a solution where they used git bisect to find the culprit that failed a test. They gave no information on how it actually worked, just a tease that they are doing it. I left a comment to see if they would share some of the secret sauce behind their solution, but until I get that response I wanted to ponder it for a moment.

Git Bisect

To speed up testing and also allow test-failure culprit identification with git bisect, we would need a custom test runner that can identify which tests to run and run them. We don’t run tests on every commit; we run tests nightly against all the commits that occurred during the day. When a test fails, it can be difficult to identify the culprit(s) that failed it. This is where Ayende steps in with his team’s idea to use bisect to help identify the culprit. Bisect works by traversing commits. It starts at the commit we mark as the last known good commit and goes to the last commit that was included in the failing nightly test. As bisect iterates over the commits, it pauses at each one and allows you to test it and mark it as good or bad. In our case we could run a test against a single commit. If it passes, tell bisect it’s good and to move to the next. If it fails, save the commit and failing test(s) as a culprit, then tell bisect it’s bad and to move to the next. This will result in a list of culprit commits and their failing tests that we can use for reporting and bashing over the head of the culprit owners (just kidding…not).
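My rough guess at what the mechanics look like, where test.ps1 is a hypothetical script that builds the checked-out commit, runs its tests, and exits 0 on pass and 1 on failure (git bisect run treats a non-zero exit as “bad”):

PS> git bisect start
PS> git bisect bad HEAD                    # last commit in the failing nightly run
PS> git bisect good <last-good-commit>     # last known green commit
PS> git bisect run powershell -File .\test.ps1
PS> git bisect reset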

Custom Test Runner

The test runner has to be intelligent enough to run all of the tests that exercise the code included in a commit. The custom test runner has to look for testable code files in the commit change log, in our case .cs files. When it finds a code file, it will identify the class in the file and find the test class that targets it. We are assuming one class per code file and one unit test class per code-file class. If this convention isn’t enforced, then some tests may be missed, or we have to do a more complex search. Once all of the test classes are found for the commit’s code files, we run the tests. If a test fails, we save the test name and maybe the failure results, exception, stack trace… so it can be associated with the culprit commit. Once all of the tests are run, if any of them failed, we mark the commit as a culprit. After the test and culprit identification is complete, we tell bisect to move to the next commit. As I said before, this will result in a list of culprits and failing test info that we can use in our feedback to the developers.
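Just to sketch the test-selection piece, assuming the one-test-class-per-class convention holds and a naming pattern of <ClassName>Tests (both assumptions, not facts about our code base):

# Which .cs files changed in this commit? ($commitSha is supplied by the runner)
$changedFiles = git diff-tree --no-commit-id --name-only -r $commitSha |
    Where-Object { $_ -like '*.cs' }

# Map each code file to its assumed test class name
$testClasses = $changedFiles | ForEach-Object {
    [System.IO.Path]::GetFileNameWithoutExtension($_) + 'Tests'
}

# Hand the class names to whatever test runner is in use (NUnit, MSTest, ...)
$testClasses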

Make It Faster

We could make this fancy and look for the specific methods that were changed in the commit’s code-file classes. We would then only find tests that test the methods that were changed. This would make testing laser focused and even faster, and we could probably employ Roslyn to handle the code analysis to make finding tests easier. I suspect tools like ContinuousTests – MightyMoose do something like this, so it’s not that far-fetched an idea, but definitely a mountain of things to think about.

Conclusion

Well, this is just a thought, a thesis if you will, and if it works, it will open up all kinds of possibilities to improve our Code Quality Pipeline. Thanks Ayende, and please think about open sourcing that bisect.ps1 PowerShell script 🙂

Working with the Windows Registry with Powershell 4.0

I figured I would rehash some of the learning I did on working with the registry with PowerShell. Most of my research on this topic was on a couple of TechNet pages.

There is nothing really new here, just trying to commit what I learned at TechNet to my brain.

WARNING: Editing your registry is dangerous. Make sure you know what you’re doing, document your changes, and have a backup so you can revert when you mess up.

The first interesting tidbit I learned was that PowerShell looks at the registry like it is a drive and working with the registry is similar to working with files and folders. The big difference is all of the keys are treated like folders and the registry entries and values are properties on the key. So, there is no concept of a file when working with the registry.

Viewing Registry Keys

Just like working with the file system in PowerShell, we can use the powerful Get-ChildItem command.

PS> Get-ChildItem -Path hkcu:\

Interesting, right? hkcu is the HKEY_CURRENT_USER registry hive, and it’s treated like a drive with all of its keys as folders under the drive. Actually, hkcu is a PowerShell drive.

PowerShell Drives

PowerShell creates a data store for PowerShell drives and this allows you to work with the registry like you do the file system.

If you want to view a list of the PowerShell drives in your session run the Get-PSDrive command.

PS> Get-PSDrive

Did you notice the other drives like Variable and Env? Can you think of a reason to use the Env drive to get access to Path or other variables?
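For example, the Env drive exposes environment variables the same way:

PS> Get-ChildItem Env:
PS> $Env:Path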

Since we are working with a drive, we can achieve the same results that we did with Get-ChildItem using basic command-line syntax.

PS> cd hkcu:\
PS> dir

Path Aliases

We can also represent the path with the registry provider name followed by “::” and the registry path. The registry provider name is Microsoft.PowerShell.Core\Registry and can be shortened to Registry. The previous example can be written as:

PS> Get-ChildItem -Path Microsoft.Powershell.Core\Registry::HKEY_CURRENT_USER
PS> Get-ChildItem -Path Registry::HKCU

The drive syntax (hkcu:\) is much easier to type, but including the provider is more verbose and explicit about what is happening (fewer comments needed in the code to explain it).

More Get-ChildItem Goodies

The examples above only list the top level keys under the path. If you want to list all keys you can use the -Recurse parameter, but if you do this on a path with many keys you will be in for a long wait.

PS> Get-ChildItem -Path Registry::HKCU -Recurse

We can use the Set-Location command to set the location of the registry. With a location set we can use “.” in the path to refer to the current location and “..” for the parent folder.

PS> Set-Location -Path Registry::HKCU\Environment
PS> Get-ChildItem -Path .
PS> Get-ChildItem -Path '..\Keyboard Layout'

Above, we set the location to the Environment key, then we get the items for the key using only “.” as the path, then we get the items in another key using the “..” to represent the parent key and indicating the key under the parent we want to get items for.

When using Get-ChildItem on the registry, we have its parameters at our disposal, like Path, Filter, Include, and Exclude. Since these parameters only work against names, we have to use more powerful cmdlets to get more meaningful filtering done. In the example provided on TechNet, we are able to get all keys under HKCU:\Software with no more than one subkey and exactly four values:

PS> Get-ChildItem -Path HKCU:\Software -Recurse | Where-Object -FilterScript {
    ($_.SubKeyCount -le 1) -and ($_.ValueCount -eq 4)
}

Working with Registry Keys

As we saw registry keys are PowerShell items. So, we can use other PowerShell item commands. Keep in mind that you can represent the paths in any of the ways that we already covered.

Copy Keys

Copy a key to another location.

PS> Copy-Item -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion -Destination Registry::HKCU -Recurse

This copies the keys in the Path to the Destination. Since we added Recurse all of the keys, not just the top level, will be copied.

Creating Keys

Create a new key.

PS> New-Item -Path Registry::HKCU\_DeleteMe

Deleting Keys

Delete a key.

PS> Remove-Item -Path Registry::HKCU\_DeleteMe\* -Recurse

This will remove all items under _DeleteMe. The \* tells PowerShell to delete the items but keep the container; if we didn’t use \*, the container, _DeleteMe, would be removed too. -Recurse removes all items in the container, not just the top-level items. If we attempted to remove without -Recurse and the item had child items, we would get a warning that we are about to remove the item and all of its children; -Recurse hides that message.

Working with Registry Entries

Working with registry keys is simple because we get to use the knowledge we know about working with the file system in PowerShell. One problem is that registry entries are represented as properties of registry key items. So, we have to do a little more work to deal with entries.

List Entries

The easiest way, IMHO, to view registry entries is with Get-ItemProperty.

PS>  Get-ItemProperty -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion

This will list all of the properties for the key, with PowerShell-related properties prefixed with “PS”.

Get Single Entry

To get a single entry we use the same Get-ItemProperty and add the -Name parameter to specify the entry we want to return.

PS> Get-ItemProperty -Path HKLM:\Software\Microsoft\Windows\CurrentVersion -Name DevicePath

This will return just the DevicePath entry along with the related PS properties.

Create New Entry

We can add a new registry key entry with the New-ItemProperty command.

PS> New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -PropertyType String -Value $PSHome

There is a little more complexity to this operation, but it’s still not rocket science. We added two more parameters. -PropertyType signifies the type of property to create, and it must be a Microsoft.Win32.RegistryValueKind (how to deal with 64-bit is something I haven’t dealt with, so I leave that to you for now). -Value in the example uses the PowerShell variable $PSHome, which is the install directory of PowerShell. You can use your own values or variables for -Value.

PropertyType Value    Meaning
Binary                Binary data
DWord                 A number that is a valid UInt32
ExpandString          A string that can contain environment variables that are dynamically expanded
MultiString           A multiline string
String                Any string value
QWord                 8 bytes of binary data

Rename Entry

To rename an entry just specify the current name and the new name.

PS> Rename-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -NewName PSHome -passthru

-NewName is the new name for the entry. The -PassThru parameter is optional and is used to display the renamed value.

Delete Entry

To delete an entry just specify the name.

PS> Remove-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PSHome

Conclusion

Is it really this easy? Why yes it is, young Padawan…yes it is. You too can control the force to destroy your registry with PowerShell :).