Category: DevOps
Cross Domain PowerShell Remoting [Fail]
I tried to run our PowerShell environment configuration scripts today and got hit with a nasty error. I double-checked my credentials, so I know that wasn't the issue. The scripts worked just a month ago, but we did have some stupid security software installed on our workstations that may be adjusting how remoting works. Let's see if I can get around it before I open a ticket and start complaining.
Here is the error. It results from a simple call to New-PSSession. The other server is in another domain, but, like I said, this has been working just fine.
New-PSSession : [agpjaxd1pciapp1] Connecting to remote server agpjaxd1pciapp1 failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request.
Possible causes are:
-The user name or password specified are invalid.
-Kerberos is used when no authentication method and no user name are specified.
-Kerberos accepts domain user names, but not local user names.
-The Service Principal Name (SPN) for the remote computer name and port does not exist.
-The client and remote computers are in different domains and there is no trust between the two domains.
After checking for the above issues, try the following:
-Check the Event Viewer for events related to authentication.
-Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated.
-For more information about WinRM configuration, run the following command: winrm help config.
For more information, see the about_Remote_Troubleshooting Help topic.
After I read this, I just stared at it for about 5 minutes; deer in the headlights.
I found some hope on the PowerShell Scripter’s friend, “Hey Scripting Guy” blog – http://blogs.technet.com/b/heyscriptingguy/archive/2013/11/29/remoting-week-non-domain-remoting.aspx.
Anyway, the solution from Honorary Scripting Guy Richard Siddaway was to add the computer I am connecting to to the trusted hosts list. The trusted hosts list basically tells your computer, "Hey, you can trust this computer, go ahead and share my sensitive and private credentials with them." So, be careful with this.
You can view the trusted host list with this PowerShell command.
Get-Item -Path WSMan:\localhost\Client\TrustedHosts
You can add a computer to the trusted list with this command.
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'computerNameOfRemoteComputer'

[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
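A couple of notes on this, sketched with the server name from the error above: a plain Set-Item overwrites whatever is already in TrustedHosts, so -Concatenate is the safer way to append, and once the host is trusted you still need to pass explicit credentials since Kerberos is out of the picture.
# Append to TrustedHosts instead of overwriting it (-Force skips the confirmation prompt)
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'agpjaxd1pciapp1' -Concatenate -Force
# With the host trusted, connect with explicit credentials (NTLM rather than Kerberos)
$credential = Get-Credential
New-PSSession -ComputerName agpjaxd1pciapp1 -Credential $credential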
Now, I run the configuration script and I am deer in the headlights again.
New-PSSession : Opening the remote session failed with an unexpected state. State Broken.
Such a helpful error message. Stack Overflow – http://stackoverflow.com/questions/30617304/exchange-remote-powershell-gets-sporadic-broken-state. It looks like it may be a timeout, and I'm feeling that because the script sat on "Creating Session" forever (why it takes so long is probably the next question). I update my script to increase the timeout.
$so = New-PSSessionOption -IdleTimeout 600000
$Session = New-PSSession -ComputerName $node.ComputerName -Credential $credential -SessionOption $so;
A 10 minute timeout is good, right? So, I try again and State is still Broken. Not mission critical at the moment, so I will investigate further later.
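One thing I plan to try when I pick this back up: -IdleTimeout only applies once a session is open, and since this one hangs at "Creating Session", the open and operation timeouts on New-PSSessionOption may be the better knobs to turn. A sketch I have not verified yet:
# Untested sketch: also raise the connection-establishment and operation timeouts (values in milliseconds)
$so = New-PSSessionOption -OpenTimeout 180000 -OperationTimeout 300000 -IdleTimeout 600000
$Session = New-PSSession -ComputerName $node.ComputerName -Credential $credential -SessionOption $so;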
You can read more about possible solutions at the links above.
GoCD: Install Multiple Agents with Powershell, Take 2
I wrote about how to Automate Agent Install with PowerShell and thought I would provide the script I am using now, since I recently had to deploy some new agents. The script is below; it is pretty self-explanatory and generally follows my previous blog post and the Go.cd documentation.
We basically copy an existing agent to a new location, remove some files that are agent specific, and create a Windows service to run the agent. Until I feel the pain of having to do it, I set the service account/password and start the service manually. Also, I configure the agent on the server manually through the Go.cd UI. When I have to install more agents, I will probably automate those steps then.
$currentAgentPath = "D:\Go Agents\Internal\1";
$newAgentName = "Go Agent Internal 3";
$newAgentPath = "D:\Go Agents\Internal\3\";
Write-Host "Copying Files"
Copy-Item "$currentAgentPath\" -Destination $newAgentPath -Recurse;
Write-Host "Deleting Agent Specific Files"
$guidText = "$newAgentPath\config\guid.txt";
if (Test-Path $guidText)
{
Remove-Item $guidText;
}
Remove-Item "$newAgentPath\.agent-bootstrapper.running";
Write-Host "Create Agent Service"
New-Service -Name $newAgentName -Description $newAgentName -BinaryPathName "`"$newAgentPath\cruisewrapper.exe`" -s `"$newAgentPath\config\wrapper-agent.conf`"";
#$credential = Get-Credential;
#Eventually, we will write a function to set the service account and password and start the service. It would be nice to have a way to automatically configure the agent on the server too.
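If you do want to script that last manual step now, here is a minimal sketch of how it might look, using the WMI Change() method to set the log-on account on the service created above (the credential prompt and account format are assumptions; adjust for your environment):
# Sketch: set the log-on account for the new service and start it
$credential = Get-Credential   # e.g. DOMAIN\svc-go-agent
$service = Get-WmiObject -Class Win32_Service -Filter "Name='$newAgentName'"
$service.Change($null, $null, $null, $null, $null, $null, $credential.UserName, $credential.GetNetworkCredential().Password) | Out-Null
Start-Service -Name $newAgentName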
I guess I decided to do the work for you 🙂
Enjoy
PowerShell in Visual Studio, Finally, At Last… Almost
Even if you don't like Microsoft or .NET, you have to admit that Visual Studio is a boss IDE. After being thrust into the world of scripting and PowerShell, I was disappointed to find the PowerShell support in Visual Studio lacking. Well, today I received a notice that Microsoft joined Adam Driscoll's open source project, PowerShell Visual Studio Tools (PVST). They announced the release of a new version and I am ready to give it another go.
Adam makes note that Microsoft submitted a large pull request full of bug fixes and features. This project provides pretty nice PowerShell support inside my favorite IDE including:
- Edit, run and debug PowerShell scripts locally and remotely using the Visual Studio debugger
- Create projects for PowerShell scripts and modules
- Leverage Visual Studio’s locals, watch, call stack for your scripts and modules
- Use the PowerShell interactive REPL window to execute PowerShell scripts and commands right from Visual Studio
- Automated Testing support using Pester
From https://visualstudiogallery.msdn.microsoft.com/c9eb3ba8-0c59-4944-9a62-6eee37294597
You can download it for free from the Visual Studio Gallery. A quick double-click install of the .vsix file you download and you're ready.
My first test was to create a PowerShell project. In the Visual Studio New Project window there’s a new project template type, PowerShell. Inside of it are two templates: PowerShell Module Project and PowerShell Script Project.
Scripting and Debugging
I start with a script project and bang out a quick Hello World script to see debugging in action.
$myName = "Charles Bryant"
$myMessage = "How you doin?"
function HelloWorld($name, $message) {
return "Hello World, my name is $name. $message"
}
HelloWorld $myName $myMessage
It feels very comfortable… like Visual Studio. I see IntelliSense, my theme works, and I can see highlighting. I can set breakpoints, step in/over, see locals, watches, the call stack, console output… feeling good because it's doing what it said it can do, and scripting PowerShell now feels a little like coding C#.
REPL Window
What about the REPL window? After a little searching, I found it tucked away on the menu: View > Other Windows > PowerShell Interactive Window. You can also get to it with Ctrl + Shift + \. I threw some quick scripts at it… ✓, it works too.
Unit Testing
The last thing I have time for is trying out unit testing. First, I install Pester on the solution. Luckily there's a NuGet package for that.
>Install-Package Pester
Then I create a simple test script file to test my Hello World script.
# Dot-source the script under test (assumes the test file name is the script name with ".tests" added)
$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".tests.", ".")
. "$here\$sut"
Describe "HelloWorld" {
It "returns correct message" {
HelloWorld "Charles Bryant" "How you doin?" | Should Be "Hello World, my name is Charles Bryant. How you doin?"
}
}
Houston, there's a problem. When I open the Test Explorer I can see a bunch of tests that come with Pester, but I don't see my little test. I try to reorganize the tests in the explorer and it freezes. Not sure if this is a problem with PVST, Pester, NuGet, Visual Studio, or user error… oh well. I can't say it is a problem with PVST because I didn't try to find out what was wrong (I still have work to do for my day job).
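A fallback that sidesteps Test Explorer entirely is running the tests straight from a console (or the PowerShell Interactive Window) with Invoke-Pester. Assuming the test above is saved as HelloWorld.tests.ps1 next to HelloWorld.ps1:
# Run one test file explicitly; calling Invoke-Pester with no arguments
# discovers every *.tests.ps1 under the current directory instead
Invoke-Pester .\HelloWorld.tests.ps1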
Conclusion
OK, unit testing isn't as intuitive as the other operations, hence the "Almost" in the title. It will feel complete when I get unit testing working for me, but nonetheless, I like this tool a lot so far. I will definitely be watching it, and if I see something within my skills that I can contribute, I will pitch in, as this is something I can definitely use.
IIS 8 Configuration File
Note to self
The IIS 8 configuration file is located at %windir%\System32\inetsrv\config\applicationHost.config. It is just an XML file and the schema is well known. You can open it, edit it (if you are brave), and otherwise do configuration stuff with it. You can diff it from system to system to find inconsistencies, or save it in a source code repository to standardize on a base configuration across web server nodes, if your project needs that kind of thing. Lastly, you can manage it with PowerShell… you can manage it with PowerShell… you can manage it with PowerShell DSC!
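For example, the WebAdministration module (assuming IIS's PowerShell management features are installed) reads and writes applicationHost.config directly. A small sketch; the siteDefaults connection timeout is just an arbitrary setting to poke at:
Import-Module WebAdministration
# Read a value straight out of applicationHost.config
Get-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.applicationHost/sites/siteDefaults/limits' -Name 'connectionTimeout'
# Change it; the new value is written back to applicationHost.config
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.applicationHost/sites/siteDefaults/limits' -Name 'connectionTimeout' -Value (New-TimeSpan -Minutes 3)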
The possibilities are endless so stop depending so much on the IIS Server Manager UI like you are in Dev preschool. You are a big boy now, remove the training wheels, but you might want to wear a helmet.
I don’t want to have this discussion again!
Blameless RCA
Let ye without failure cast the first stone.
I am involved in a workgroup at work that is exploring Root Cause Analysis in the hopes that we can come up with a way to help everyone improve their RCA process and procedures.
I believe it is important in our RCA recommendations to strive to build a culture around RCA. To borrow from a theme brought up by a workgroup member, culture building should be extended to retrospectives and all of our continuous improvement processes in general.
Just Culture
For RCA to be most effective, we should instill the idea of the "blameless postmortem" into how we envision RCA. The blameless postmortem is an awesome concept, part of a culture around failure called a "Just Culture", that was introduced to me in a blog post by John Allspaw, Web Operations guru at Etsy. It's a way to encourage team members to own their failures without fear, in the hopes that a less hostile environment towards failure will encourage fast, detailed feedback during active issue resolution and postmortems. We want team members to volunteer to report an issue as soon as they see it or cause it.
Owning Failure
In terms of RCA, this boils down to instilling the idea that finding who's at fault, or what team missed this or that, is not important. The only thing that matters is how, when, and why an issue was leaked; "who" is not under investigation. Granted, who is at fault will most likely come out, and it should, but there should be no condemnation or negative side effect to owning a failure. We want "who" to come from the failure owners themselves, not from a lot of intricate detective work. We want the team to freely offer the actions that may have contributed to a failure, in the hopes that we can compile a timeline of the failure from multiple narratives and perspectives. When we can own failure without retribution, we are more apt to own up to a failure and share the details that led to it so that it can be corrected.
Remove Managerial Blockages on RCA
There are managers who want to know who to blame so that they can monitor who is causing issues. If there is a problem with someone continuously failing, it will be evident without having to formally expose personal failures in the RCA process or as a part of team culture. Root cause is usually deeper than one person or team's failure; there are usually multiple stories that contribute to a failure. There are also managers who use hindsight to amplify the negative effect of failure to try to shame someone into being better. Highlighting what should have been done is not helpful, as it doesn't lead to change. Oftentimes hindsight is disguised as a solution without ever understanding why the actions that caused the failure were taken, or even how the manager's own mismanagement may have contributed. I only add this because I have seen many RCAs or postmortems fail because of a manager trying to place blame and using their limited hindsight to declare the problem solved.
And More
There is a lot of good that comes from a Just Culture. Since I saw some things in the RCA practices at work that may lead to the blame game, I thought that the blameless postmortem should be explicitly built into our RCA process in the hopes that it affects the culture. Just something to think about if you are going down this same road.
Orphaned Powershell PSDrive
I received this strange error while executing a script that creates a new PSDrive.
New-PSDrive : The local device name has a remembered connection to another network resource
I tried to use Remove-PSDrive, but
Remove-PSDrive : Cannot find drive. A drive with the name 'S:' does not exist.
I was able to fix this issue with the “net use” command.
First, I ran it to see if the drive was still mapped. I am still unsure how it persists between PowerShell sessions; I must have missed something.
PS C:\> net use
New connections will be remembered.

Status       Local     Remote                     Network
-------------------------------------------------------------------------------
Unavailable  S:        \\node1\d$                 Microsoft Windows Network
Unavailable  I:        \\node2\it                 Microsoft Windows Network
OK           P:        \\public                   Microsoft Windows Network
The command completed successfully.
Then I ran “net use” with the delete parameter to remove the orphaned drive.
PS C:\> net use /delete S:
S: was deleted successfully.
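On newer Windows versions there is also a PowerShell-native way to see and clear these remembered mappings without shelling out to net.exe, assuming the SmbShare module is available (Windows 8 / Server 2012 and later):
# List remembered SMB mappings (roughly the same view "net use" gives you)
Get-SmbMapping
# Remove the orphaned one
Remove-SmbMapping -LocalPath 'S:' -Force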
I love it when a plan comes together.
What is this CIM I keep running into in Powershell?
I keep having to use CIM in my scripts, but what is it? I understand how to use it, but where did it come from and what does it stand for? Like every developer I know, I turned to a search engine to solve this mystery.
There is an industry standards organization called the DMTF (Distributed Management Task Force) that defined a standard named the Common Information Model. By the way, this is the same group that defined MOF (Managed Object Format), the format under the covers of DSC. CIM is defined using MOF and is a cross-platform common definition of management information for systems, networks, applications, and services that allows for vendor extensions. How was that for acronym soup?
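In PowerShell terms, CIM mostly shows up through the CIM cmdlets (Get-CimInstance, New-CimSession, and friends), which query those standardized classes locally or over WS-Man. A quick sketch; the remote computer name is just a placeholder:
# Query a CIM class on the local machine
Get-CimInstance -ClassName Win32_OperatingSystem | Select-Object Caption, Version, LastBootUpTime
# Or query a remote machine over a CIM session
$cimSession = New-CimSession -ComputerName 'server01'
Get-CimInstance -CimSession $cimSession -ClassName Win32_LogicalDisk -Filter "DriveType=3"
Remove-CimSession $cimSession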
Update PSModulePath for Custom PowerShell Module Development
I am in the process of a deep dive into DSC and I want to store my custom modules and DSC Resources in source control. To make PowerShell modules easy to run, you have to import them or have them on the PSModulePath environment variable. Since I don't want to point a source repository at the default PowerShell module path, I want to add my custom module path to PSModulePath. This will save me from having to re-import modules every time they change. It also means I will always be running the most recent version of my modules, even the buggy ones, so if you do this, understand the implications.
It’s actually pretty easy to automate this with PowerShell. Since I already have some experience updating environment variables with PowerShell I just created a new script to add my custom module path to PSModulePath.
$currentModulePath = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")
$customModulePath = "C:\_DSC\DSCResources"
$newModulePath = $currentModulePath + ";" + $customModulePath
[Environment]::SetEnvironmentVariable("PSModulePath", $newModulePath, "Machine")
I made this script a bit more verbose than it needs to be so it is more self-evident what is happening (code as documentation – no comments necessary).
I can envision someone also needing to remove a path from PSModulePath, but this is enough to get started, so I will leave that up to you until I have a need for it :).
UPDATES
When running this script with Invoke-Command on a remote session, the modules aren't immediately available if I try to use modules from the new path. This is because the path is not updated in the existing remote session. A quick workaround for me was to remove the session and recreate it.
Get-PSSession | Remove-PSSession;
This removes all sessions, so you may not want to do this. Since I don't care about the existing sessions, I like it. This was just a one line change in my workflow script and it didn't add too much latency to the script execution. I know there are other solutions that involve messing with the registry, but this is a one time deal, so resetting the remote session works for me.
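For context, the workaround in my workflow script looks roughly like this (the variable names are stand-ins for whatever your script already uses):
# Drop the cached sessions, then build a fresh one so the new machine-level PSModulePath is picked up
Get-PSSession | Remove-PSSession;
$session = New-PSSession -ComputerName $node.ComputerName -Credential $credential;
Invoke-Command -Session $session -ScriptBlock { $env:PSModulePath }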
GoCD: Agent Running Web Driver Test Hangs After Test Failure [SOLVED]
Problem
I had a nagging issue where some tests were failing the build on our GoCD server, but the agent was not reporting the failure to the server. We are using NAnt to run NUnit tests that in turn call Web Driver to exercise web pages. There were test failures that correctly returned a non-zero value and failed the build in NAnt. Also, the failure was captured in the log and saved in a text file. Yet, the agent didn't report the build failure or send the artifacts to the server.
Issue
After a two day search for answers and a deep dive into the bowels of GoCD, I discovered that a Web Driver process was being kept open after a test failed the build. Specifically, the process is IEDriverServer.exe. This process was being orphaned by improper cleanup in the tests, which resulted in the Web Driver and browsers staying open after the test failure.
When I ran the tests again, I watched for the failure then manually killed Web Driver and the agent magically reported to the server. I am still unsure why Web Driver would prevent the GoCD agent from reporting to the server. They are both Java processes, maybe there is something going on in the JVM or something… not sure.
Solution
My workaround at the moment is to run a task killer on failure in the test script. Here is the relevant portion of the NAnt script that drives the tests:
<property name="nant.onfailure" value="test.taskkiller" />
<target name="test.taskkiller">
  <exec program="taskkiller.bat" failonerror="false" />
</target>
The taskkiller.bat is just a simple bat file that will kill Web Driver and open browsers.
taskkill /IM IEDriverServer.exe /F
taskkill /IM iexplore.exe /F
Now this is just a band-aid. We will be updating our test framework to handle this. Additionally, killing all the processes like this isn’t good if we happen to be running tests in parallel on the agent, which may be a possibility in the future.
Using GitHub Behind a Proxy (Windows)
At work I am connected to the internet through a proxy. The proxy prevented me from connecting to repositories on GitHub because authentication isn't handled properly. A co-worker recommended using the CNTLM proxy (http://cntlm.sourceforge.net/) to handle the authentication.
CNTLM works well, but he said he was having a problem with slow connections. He found an issue where the proxy would try to connect multiple times, time out, and finally connect to the Git server. He noticed that it was trying to connect to localhost via ::1, the IPv6 loopback address. He said that adding a proxy to .gitconfig (the global or system-wide config) made it connect faster without having to wait for all the connection attempts and failures:
[http]
proxy = http://127.0.0.1:3128
[https]
proxy = http://127.0.0.1:3128
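If you would rather not edit the file by hand, the same settings can be written from the console with git config (these assume the global config; use --system for the system-wide one):
git config --global http.proxy http://127.0.0.1:3128
git config --global https.proxy http://127.0.0.1:3128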
Why does this work? I don't have enough geek cred to know this yet, but it works and I wanted to save it here for the time when I have to set up a new computer and forget what I did.
