Orphaned PowerShell PSDrive

I received this strange error while executing a script that creates a new PSDrive.

New-PSDrive : The local device name has a remembered connection to another network resource

I tried to use Remove-PSDrive, but got

Remove-PSDrive : Cannot find drive. A drive with the name 'S:' does not exist.

I was able to fix this issue with the “net use” command.

First, I ran it to see if the drive was still mapped. I am still unsure how the mapping persists between PowerShell sessions; I must have missed something.

PS C:\> net use
New connections will be remembered.

Status        Local     Remote           Network
-------------------------------------------------------------------------------
Unavailable   S:        \\node1\d$       Microsoft Windows Network
Unavailable   I:        \\node2\it       Microsoft Windows Network
OK            P:        \\public         Microsoft Windows Network
The command completed successfully.

Then I ran “net use” with the delete parameter to remove the orphaned drive.

PS C:\> net use /delete S:
S: was deleted successfully.
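If you would rather stay in PowerShell, newer versions of Windows (8 / Server 2012 and later) ship the SmbShare module, which can see and remove these remembered mappings that Get-PSDrive misses. A sketch, not something I have battle-tested against this exact scenario:

```powershell
# List SMB drive mappings, including remembered ones Get-PSDrive can't see
Get-SmbMapping

# Remove the orphaned S: mapping; -UpdateProfile also clears the
# remembered (persistent) connection from the user profile
Remove-SmbMapping -LocalPath "S:" -Force -UpdateProfile
```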

I love it when a plan comes together.

What is this CIM I keep running into in Powershell?

I keep having to use CIM in my scripts, but what is it? I understand how to use it, but where did it come from and what does it stand for? Like every developer I know, I turned to a search engine to solve this mystery.

There is an industry standards organization called the DMTF (Distributed Management Task Force) that defined a standard named the Common Information Model. By the way, this is the same group that defined MOF (Managed Object Format), the syntax that lives under the covers of DSC. CIM schemas are written in MOF, and CIM itself is a cross-platform, vendor-extensible definition of management information for systems, networks, applications, and services. How was that for acronym soup?
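In PowerShell, CIM surfaces through the CIM cmdlets (Get-CimInstance and friends), which query that common model, as opposed to the older DCOM-based WMI cmdlets. A quick example:

```powershell
# Query a CIM class from the local repository and pick a few properties
Get-CimInstance -ClassName Win32_OperatingSystem |
    Select-Object Caption, Version, LastBootUpTime
```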

Mystery Solved.

Update PSModulePath for Custom PowerShell Module Development

I am in the process of a deep dive into DSC, and I want to store my custom modules and DSC resources in source control. To run PowerShell modules easily, you have to import them or have them on the PSModulePath environment variable. Since I don't want to point a source repository at the default PowerShell module path, I want to add my custom module path to PSModulePath. This saves me the time of re-importing modules after every change. It also means I will always be running the most recent version of my modules, even the buggy ones, so if you do this, understand the implications.

It's actually pretty easy to automate this with PowerShell. Since I already have some experience updating environment variables with PowerShell, I just created a new script to add my custom module path to PSModulePath.

$currentModulePath = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")
$customModulePath = "C:\_DSC\DSCResources"
$newModulePath = $currentModulePath + ";" + $customModulePath
[Environment]::SetEnvironmentVariable("PSModulePath", $newModulePath, "Machine")

I made this script a bit more verbose than necessary so it is self-evident what is happening (code as documentation; no comments necessary).

I can envision someone also needing to remove a path from PSModulePath, but this is enough to get started, so I will leave that to you until I have a need for it :).
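That said, a removal script is roughly the reverse of the one above: split the variable, filter out the path, and join it back up. A sketch, using the same C:\_DSC\DSCResources path from the script above:

```powershell
$customModulePath = "C:\_DSC\DSCResources"
$currentModulePath = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")

# Split on ';', drop the custom path, and rejoin the remainder
$newModulePath = ($currentModulePath -split ";" |
    Where-Object { $_ -ne $customModulePath }) -join ";"

[Environment]::SetEnvironmentVariable("PSModulePath", $newModulePath, "Machine")
```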

UPDATES

When running this script through Invoke-Command on a remote session, modules in the new path aren't immediately available, because the environment of the existing remote session is not updated. A quick workaround for me was to remove the session and recreate it.

Get-PSSession | Remove-PSSession;

This removes all sessions, so you may not want to do it. Since I don't care about the sessions, I like it. It was just a one-line change in my workflow script and didn't add much latency to the script execution. I know there are other solutions that involve messing with the registry, but this is a one-time deal, so resetting the remote session works for me.
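In context, the workflow step looks something like this (node1 and the script name are placeholders for my actual values):

```powershell
# Update PSModulePath on the remote machine (hypothetical script name)
Invoke-Command -Session $session -FilePath .\Update-PSModulePath.ps1

# Throw away the stale session so the new PSModulePath gets picked up
Get-PSSession | Remove-PSSession

# Recreate the session; modules in the custom path are now resolvable
$session = New-PSSession -ComputerName node1
Invoke-Command -Session $session -ScriptBlock { Get-Module -ListAvailable }
```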

Orlando Code Camp Call for Speakers

Static Property Bug Caused by a Code Tool

Consider the following private static fields at the top of a class:

private static readonly string taskTimeStampToken = string.Format("{0}Timestamp: ", taskToken);
private static readonly string taskToken = "[echo] @@";

What will this method in the same class print to the console when called?

public void Print()
{
    Console.WriteLine(taskTimeStampToken);
}

For all you code geniuses who got this right away, I say shut up with your smug look and condescending tone as you say, of course it prints

Timestamp:

So, why doesn’t it print

[echo] @@Timestamp:

Because taskToken hasn't been initialized yet, of course. Static fields are initialized in the order they are declared, so when taskTimeStampToken is initialized, taskToken is still null. The declaration order of static fields matters in your code. Don't forget it, especially if you use a tool that reorganizes your code.

I really did know this ;), but this fact caused me about an hour of pain. I use a wonderful little tool called CodeMaid to help keep my code standardized. One of its functions is to reorganize my code into a consistent format (e.g., private members above public, constructor before everything…).

I obviously had never run CodeMaid on this particular file with code similar to the above, because unit tests had always passed. Then I made a code change in said file, CodeMaid did its thing, the order of the fields flipped as shown above, and unit tests started failing. It took me at least an hour before I took a deep breath and noticed that the fields had flipped.

Lesson Learned

  • If you do any type of refactoring, even with a tool, make sure you have unit tests covering the file you refactor.
  • Use a diff tool to investigate mysteriously failing tests. It will give better visual clues on changes to investigate instead of relying on tired eyes.
  • Configure your code formatting tool not to reorder static fields, or request that feature if it doesn't exist.

GoCD: Agent Running Web Driver Test Hangs After Test Failure [SOLVED]

Problem

I had a nagging issue where some tests were failing the build on our GoCD server, but the agent was not reporting the failure to the server. We use NAnt to run NUnit tests that in turn call WebDriver to exercise web pages. Some test failures correctly returned a non-zero exit code that failed the build in NAnt, and the failure was captured in the log and saved to a text file. Yet the agent didn't report the build failure or send the artifacts to the server.

Issue

After a two-day search for answers and a deep dive into the bowels of GoCD, I discovered that a WebDriver process was left running after a test failed the build. Specifically, the process is IEDriverServer.exe. It was being orphaned by improper cleanup in the tests, which resulted in WebDriver and the browsers staying open after the test failure.

When I ran the tests again, I watched for the failure, then manually killed WebDriver, and the agent magically reported to the server. I am still unsure why WebDriver would prevent the GoCD agent from reporting to the server. Maybe there is something going on between the processes or in the JVM… not sure.

Solution

My workaround at the moment is to run a task killer on failure in the test script. Here is the relevant portion of the NAnt script that drives the tests:

<property name="nant.onfailure" value="test.taskkiller" />

<target name="test.taskkiller">
    <exec program="taskkiller.bat" failonerror="false" />
</target>

The taskkiller.bat is just a simple batch file that kills WebDriver and any open browsers.

taskkill /IM IEDriverServer.exe /F
taskkill /IM iexplore.exe /F

Now, this is just a band-aid; we will be updating our test framework to handle cleanup properly. Additionally, killing all of these processes isn't good if we happen to be running tests in parallel on the agent, which may be a possibility in the future.
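If you prefer PowerShell to a batch file, the same cleanup can be done with Stop-Process; -ErrorAction SilentlyContinue keeps it quiet when the processes aren't actually running:

```powershell
# Kill any orphaned IE WebDriver servers and browser windows
Stop-Process -Name "IEDriverServer", "iexplore" -Force -ErrorAction SilentlyContinue
```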

Sev1 Incident

I read a book called The Phoenix Project, a surprisingly good book about a company establishing a DevOps culture. One of the terms in the book that I had no experience with was "Sev1 incident." I have since heard it repeated and have come to find out that it is part of a common grading of incident severity. About a year after reading the book, I finally decided to research it and put more thought into a formalized incident reporting, triage, mitigation, and postmortem workflow, similar to the thoughts I had on triaging failing automated tests.

Severity Levels

So, first to define the severity levels. Fortunately, David Lutz has a good breakdown on his blog – http://dlutzy.wordpress.com/2013/10/13/incident-severity-sev1-sev2-sev3-sev4-sev5/.


  • Sev1: Complete outage
  • Sev2: Major functionality broken and revenue affected
  • Sev3: Minor problem, bug
  • Sev4: Redundant component failure
  • Sev5: False alarm or alert for something you can’t fix

Identifying the Levels

With that, I need to define how to identify the levels. IBM has a breakdown that simplifies it on their Java SDK site – http://publib.boulder.ibm.com/infocenter/javasdk/v1r4m2/index.jsp?topic=%2Fcom.ibm.java.doc.diagnostics.142%2Fhtml%2Fbugseverity.html:

Sev 1

  • In development: You cannot continue development.
  • In service: Customers cannot use your product.

Sev 2

  • In development: Major delays exist in your development.
  • In service: Users cannot access a major function of your product.

Sev 3

  • In development: Major delays exist in your development, but you have temporary workarounds, or can continue to work on other parts of your project.
  • In service: Users cannot access minor functions of your product.

Sev 4

  • In development: Minor delays and irritations exist, but good workarounds are available.
  • In service: Minor functions are affected or unavailable, but good workarounds are available.

Severity Analysis

Now that we have more guidance on identifying the severity of an incident, how should one be reported? I believe anyone can report an incident (a bug, something not working), but it is up to an analyst to determine the severity level of the report.

So, the first step is for the person who discovered the issue to open a ticket. Of course, if it is a customer and we don't have a self-service system, they will probably report it to an employee in support or sales, and the employee will create the ticket for the customer. All tickets should be auto-routed to the analyst team, where each is assigned to an analyst to triage. The analyst assigns the severity level and routes the ticket to engineering support, where it is reviewed, discussed, and prioritized. The analyst in this instance can be a QA, a BA, or even a developer assigned to the task, but the point is to have a dedicated team or person responsible.

During the analysis, a timeline of the failure should be established. What led up to the failure (the changes, actions taken, and people involved) should all be laid out in chronological order. Also, during triage, a description of how to recreate the failure should be written if possible. The goal is to collect as much information about the failure as possible in one place so that the team can review it and help investigate. The Sev level should determine how much detail is collected and how quickly feedback is given.

Conclusion

This is turning out to be a lot deeper than I care to dive right now, but it gives me food for thought. My takeaways so far are to

  • formalize severity levels
  • define how to identify the levels
  • assign someone to do the analysis and assign the levels


Using GitHub Behind a Proxy (Windows)

At work I am connected to the internet through a proxy. The proxy prevented me from connecting to repositories on GitHub because authentication isn’t handled properly. A co-worker recommended the CNTLM proxy (http://cntlm.sourceforge.net/) to handle the authentication.

CNTLM works well, but he said he was having a problem with slow connections. He found that the proxy would try to connect multiple times, time out, and only then finally connect to the Git server. He noticed that it was trying to connect to localhost via ::1, the IPv6 loopback address. He said that adding a proxy to .gitconfig (the global or system-wide config) made it connect faster, without waiting through all the connection attempts and failures:

[http]
proxy = http://127.0.0.1:3128
[https]
proxy = http://127.0.0.1:3128

Why does this work? I don’t have enough geek cred to know for sure yet; my guess is that pointing Git explicitly at 127.0.0.1 skips the failing IPv6 (::1) connection attempts and their timeouts. Either way, it works, and I wanted to save it here for the time I have to set up a new computer and forget what I did.
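For the record, the same settings can be applied from the command line instead of editing .gitconfig by hand; these write exactly the [http] and [https] sections shown above into the global config:

```powershell
# Point Git at the local CNTLM proxy (written to the global .gitconfig)
git config --global http.proxy http://127.0.0.1:3128
git config --global https.proxy http://127.0.0.1:3128
```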

Join the Hour of Code

Give Back

If you are a programmer, developer, software engineer… someone who writes code, think about giving back this week, in honor of Computer Science Education Week, by helping introduce programming to someone. President Obama kicked off the week by announcing this year’s Hour of Code. If you were under a rock last year, Hour of Code is a global movement to expose kids to coding and get them excited about it.

Anybody Can Teach and Learn

Even if you aren’t a coder by trade or hobby, you can still teach and learn with these simple tutorials:

If Nothing Else, Spread the Word

If you can’t personally walk someone through some of the fun Hour of Code tutorials, the least you could do is spread the word through your social networks. Share the Hour of Code movement with others that may be in a position to help pass the torch to the future leaders of our industry.

GoCD: Versioning .Net Assemblies

I recently updated the versioning on my build server to help separate CI builds from builds that are publicly distributed. My versioning scheme for CI builds looks like 5.4.4-239CI37380; following SemVer 2.0, this gives me Major.Minor.Patch-PreRelease. My PreRelease is the Go counter + “CI” + the source revision number.

Unfortunately, assembly versions use a different scheme, Major.Minor.Build.Revision, and are only allowed to contain numbers, with no dashes (AssemblyVersionAttribute). So I ended up keeping the CI version for file names, but changed the assembly version to map Major.Minor.Patch onto Major.Minor.Build (you with me?). Then, to help identify different builds, I added the Go counter to the end as the Revision.

The lesson is to use only numbers in your .NET assembly version numbers.
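The CI-to-assembly mapping can be sketched quickly in PowerShell; the inputs here just mirror the 5.4.4-239CI37380 example above, with the counter and revision standing in for the values GoCD supplies at build time:

```powershell
$major = 5; $minor = 4; $patch = 4
$goCounter = 239        # GoCD pipeline counter
$revision  = 37380      # source control revision number

# SemVer 2.0 CI version, used for file and package names
$ciVersion = "$major.$minor.$patch-$($goCounter)CI$revision"

# Assembly version: numbers only, Major.Minor.Build.Revision
$assemblyVersion = "$major.$minor.$patch.$goCounter"

$ciVersion        # 5.4.4-239CI37380
$assemblyVersion  # 5.4.4.239
```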