Cross Domain PowerShell Remoting [Fail]

I tried to run our PowerShell environment configuration scripts today and got hit with a nasty error. I double-checked my credentials, so I know that wasn't the issue. The scripts worked just a month ago, but we did have some stupid security software installed on our workstations that may be adjusting how remoting works. Let's see if I can get around it before I open a ticket and start complaining.

Here is the error. This results from a simple call to New-PSSession. The other server is in another domain, but like I said this has been working just fine.

 New-PSSession : [agpjaxd1pciapp1] Connecting to remote server agpjaxd1pciapp1 failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request.
 Possible causes are:
   -The user name or password specified are invalid.
   -Kerberos is used when no authentication method and no user name are specified.
   -Kerberos accepts domain user names, but not local user names.
   -The Service Principal Name (SPN) for the remote computer name and port does not exist.
   -The client and remote computers are in different domains and there is no trust between the two domains.
 After checking for the above issues, try the following:
   -Check the Event Viewer for events related to authentication.
   -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport.
    Note that computers in the TrustedHosts list might not be authenticated.
   -For more information about WinRM configuration, run the following command: winrm help config.
 For more information, see the about_Remote_Troubleshooting Help topic.

After I read this, I just stared at it for about five minutes; deer in the headlights.

I found some hope on the PowerShell scripter's friend, the "Hey, Scripting Guy!" blog – http://blogs.technet.com/b/heyscriptingguy/archive/2013/11/29/remoting-week-non-domain-remoting.aspx.

Anyway, the solution from Honorary Scripting Guy Richard Siddaway was to add the remote computer to the trusted host list. The trusted host list basically tells your computer, "Hey, you can trust this computer, go ahead and share my sensitive and private credentials with it." So, be careful with this.

You can view the trusted host list with this PowerShell command.

Get-Item -Path WSMan:\localhost\Client\TrustedHosts

You can add a computer to the trusted list with this command.

Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'computerNameOfRemoteComputer'
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): Y
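
One caveat: as far as I can tell, Set-Item replaces the entire TrustedHosts value, so if the list already has entries you probably want to append instead of overwrite. A quick sketch using the WSMan provider's -Concatenate switch (the computer name is just a placeholder):

# Append to the existing TrustedHosts list instead of replacing it (-Force skips the confirmation prompt)
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'computerNameOfRemoteComputer' -Concatenate -Force

# Verify the change
Get-Item -Path WSMan:\localhost\Client\TrustedHosts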

Now, I run the configuration script and I am a deer in the headlights again.

New-PSSession : Opening the remote session failed with an unexpected state. State Broken.

Such a helpful error message. Stack Overflow to the rescue – http://stackoverflow.com/questions/30617304/exchange-remote-powershell-gets-sporadic-broken-state. It looks like it may be a timeout, and I'm inclined to believe that because the script sat on "Creating Session" forever (why it takes so long is probably the next question). I updated my script to increase the timeout.

$so = New-PSSessionOption -IdleTimeout 600000
$Session = New-PSSession -ComputerName $node.ComputerName -Credential $credential -SessionOption $so;

A 10-minute timeout is good, right? So, I try again and the state is still Broken. It's not mission critical at the moment, so I will investigate further later.
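
When I do get back to it, the first thing on my list is forcing a different authentication method, since the original Kerberos error suggests exactly that for cross-domain connections without a trust. An untested sketch, reusing the same variables as above:

# Fall back to Negotiate (NTLM) instead of Kerberos for the cross-domain connection
$so = New-PSSessionOption -IdleTimeout 600000
$Session = New-PSSession -ComputerName $node.ComputerName -Credential $credential -SessionOption $so -Authentication Negotiate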

You can read more about possible solutions at the links above.

Multitenant Thoughts

I am building my third multitenant SaaS solution. I am not referencing any of my earlier work because I think those solutions were way more work than they should have been. Also, I have since moved on from the ASP.net Web Forms development mindset, and I want to start with a fresh perspective instead of trying to improve my big balls of spaghetti code.

Today, my thoughts center around enforcing the inclusion and processing of a tenant ID in every command and query. My tenant model keeps all tenant data in a shared database and shared tables. To keep everything segregated, every time I write or read data there has to be a tenant ID included so that we don't mess with the wrong tenant's data.

I have seen all kinds of solutions for this, some more complicated than I care to tackle at this moment. I am currently leaning towards enforcing it in the data repository.

I am using a generic repository for CRUD operations and an event repository for async event-driven workflows. In the repository APIs I want to introduce a validated tenant ID parameter on every write and read operation. This will force all clients to provide the ID when they call the repos.

I just have to update a couple of classes in the repos to enforce inclusion of the tenant ID when I write data. Also, every read will use the tenant ID to scope the result set to a specific tenant's data. I already have a proof of concept for this app, so this will be a breaking change for my existing clients, but it is still not a lot of work, especially considering that I almost decided to enforce the tenant ID in a layer higher than the repo, which would have been a maintenance nightmare.

Is this best practice? No. I don't think there is a best practice beyond the fact that you should use a tenant ID to segregate tenant data in a shared data store. This solution works for my problem and I am able to maintain it in just a couple of classes. If the problem changes I can look into the fancy solutions I read about.

Now, how will I resolve the tenant ID? Sub-folder, sub-domain, query string, custom domain…?

CQRS is so easy.

This is just a quick post as I sit in the hospital bored to death and recovering.

I have been intrigued by CQRS (Command Query Responsibility Segregation) ever since I heard about it in a talk by Greg Young at the first Code on the Beach conference. I decided to surf the blogosphere to see what people are doing these days with CQRS. It seems like there is still quite a bit of confusion about what it is.

I have to admit, at first it was extremely intimidating to me too. Not because CQRS is hard, but like many people, I blurred the lines between CQRS, DDD, and event sourcing. When you look at CQRS in the context of everything it is not, it tends to look more complicated than it really is.

I am going to borrow a trick that Greg uses and show a code example of the simplicity of CQRS. Let’s say we have a service that provides an API to manage a product catalog:

ProductManager

void CreateProduct(Product product)
Product GetProduct(string productId)
bool IsProductInStock(string productId)
void UpdateProductPrice(string productId, decimal price)
void RemoveProduct(string productId)
List<Product> GetProductRecommendations(string productId)

If we apply CQRS to this service, we would simply wind up with two services. One that we can optimize for write operations (commands) and another that is optimized for read operations (queries).

ProductQueries

Product GetProduct(string productId)
bool IsProductInStock(string productId)
List<Product> GetProductRecommendations(string productId)

ProductCommands

void CreateProduct(Product product)
void UpdateProductPrice(string productId, decimal price)
void RemoveProduct(string productId)

Easy, peasy… With the read and write operations segregated into their own APIs, you are free to do all sorts of fancy optimizations and are on better footing to explore DDD and event sourcing.

Conclusion

The point is that CQRS is just an extension of the simple CQS (Command Query Separation) pattern that moves the concept a step further, from the method level to the class or API level. Nothing more, nothing less. I believe most applications can benefit from CQRS even if you aren't going to do DDD or event sourcing. So, read all about CQRS, but if you are studying more than the simple separation of read and write operations, you have read too far and are getting into other concepts.

GoCD: Install Multiple Agents with PowerShell, Take 2

I wrote about how to Automate Agent Install with PowerShell and thought I would provide the script I am using now, since I recently had to deploy some new agents. The script is below; it is pretty self-explanatory and generally follows my previous blog post and the Go.cd documentation.

Basically, we copy an existing agent to a new location, remove some agent-specific files, and create a Windows service to run the agent. Until I feel the pain of having to do it, I set the service account/password and start the service manually. I also configure the agent on the server manually through the Go.cd UI. When I have to install more agents, I will probably automate it then.

$currentAgentPath = "D:\Go Agents\Internal\1";
$newAgentName = "Go Agent Internal 3";
$newAgentPath = "D:\Go Agents\Internal\3\";

Write-Host "Copying Files"
Copy-Item "$currentAgentPath\" -Destination $newAgentPath -Recurse;

Write-Host "Deleting Agent Specific Files"
$guidText = "$newAgentPath\config\guid.txt";

if (Test-Path $guidText)
{
 Remove-Item $guidText;
}

Remove-Item "$newAgentPath\.agent-bootstrapper.running";

Write-Host "Create Agent Service"
New-Service -Name $newAgentName -Description $newAgentName -BinaryPathName "`"$newAgentPath\cruisewrapper.exe`" -s `"$newAgentPath\config\wrapper-agent.conf`"";

#$credential = Get-Credential;
#Eventually, we will write a function to set the service account and password and start the service. It would be nice to have a way to automatically configure the agent on the server too.
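
If you do want to script that last bit, here is a rough, untested sketch of what it might look like, using sc.exe to set the service logon account and Start-Service to bring the agent up (the credential prompt and account are placeholders):

# Untested sketch: prompt for the service account (use DOMAIN\user or .\user format)
$credential = Get-Credential;
$account = $credential.UserName;
$password = $credential.GetNetworkCredential().Password;

# sc.exe requires the space after obj= and password=
sc.exe config "$newAgentName" obj= "$account" password= "$password";

# Start the new agent service
Start-Service -Name $newAgentName;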

I guess I decided to do the work for you 🙂

Enjoy

Quick Testing Legacy ASP.net Web Services

If you still have legacy ASP.net web services, the old .asmx file variety, and you need to do a quick test from a server that doesn't have fancy testing tools, this is an easy way to test the service with just a browser and an HTML file.

Test Service GET Method

To test the service’s GET methods you can use a browser and a specially formatted URL.

http://domain/service.asmx/method?parameter=value

For example, I have

  • a domain, legacywebservices.com
  • it hosts a service, oldservice.asmx
  • that has a GET method, GetOldData
  • that accepts parameters, ID and Name

The URL to test this web service method would be

http://legacywebservices.com/oldservice.asmx/GetOldData?ID=1000&Name=Some Old Data

This would return an XML file containing the response from the service or an error to troubleshoot.

Test Service POST Method

To test the service’s POST methods you can use a simple HTML file containing a form. Just open the form in your browser, enter the values, and submit.

<form method="POST" action="http://domain/service.asmx/method"><div><input type="text" name="parameter" /></div><div><input type="submit" value="method" /></div></form>

For example, I have

  • a domain, legacywebservices.com
  • it hosts a service, oldservice.asmx
  • that has a POST method, SaveOldData
  • that accepts parameters, ID and Name

The HTML form to test this web service method would be

<form method="POST" action="http://legacywebservices.com/oldservice.asmx/SaveOldData"><div>ID: <input type="text" name="ID" /></div><div>Name: <input type="text" name="Name" /></div><div><input type="submit" value="SaveOldData" /></div></form>

This would return an XML file containing the response from the service or an error to troubleshoot.
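
If the server happens to have PowerShell 3.0 or later on it, you can also skip the HTML file and post the form values with Invoke-WebRequest. This is just a sketch against the same hypothetical service:

# Post the form fields to the legacy .asmx method and dump the XML response
$body = @{ ID = 1000; Name = "Some Old Data" }
$response = Invoke-WebRequest -Uri "http://legacywebservices.com/oldservice.asmx/SaveOldData" -Method Post -Body $body
$response.Content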

Troubleshoot

If you get a System.Net.WebException error message that indicates the request format is unrecognized, you need to do some configuration to get it to work as explained in this KB. Just add this to the system.web node in the web.config of the web service and you should be good to go.

<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>

Conclusion

If you are sentenced to maintaining and testing legacy ASP.net web services, these simple tests can help uncover pesky connectivity, data and other issues that don’t return proper exceptions or errors because your app is old and dumb (even if you wrote it).

PowerShell in Visual Studio, Finally, At Last… Almost

Even if you don't like Microsoft or .Net, you have to admit that Visual Studio is a boss IDE. After being thrust into the world of scripting and PowerShell, I was disappointed to find the PowerShell support in Visual Studio lacking. Well, today I received a notice that Microsoft joined Adam Driscoll's open source project, PowerShell Visual Studio Tools (PVST). They announced the release of a new version and I am ready to give it another go.

Adam notes that Microsoft submitted a large pull request full of bug fixes and features. This project provides pretty nice PowerShell support inside my favorite IDE, including:

  • Edit, run and debug PowerShell scripts locally and remotely using the Visual Studio debugger
  • Create projects for PowerShell scripts and modules
  • Leverage Visual Studio's locals, watch, and call stack windows for your scripts and modules
  • Use the PowerShell interactive REPL window to execute PowerShell scripts and commands right from Visual Studio
  • Automated Testing support using Pester
    From https://visualstudiogallery.msdn.microsoft.com/c9eb3ba8-0c59-4944-9a62-6eee37294597

You can download it for free from the Visual Studio Gallery. A quick double-click install of the .vsix file you download and you're ready.

My first test was to create a PowerShell project. In the Visual Studio New Project window there’s a new project template type, PowerShell. Inside of it are two templates: PowerShell Module Project and PowerShell Script Project.

Scripting and Debugging

I start with a script project and bang out a quick Hello World script to see debugging in action.

$myName = "Charles Bryant"
$myMessage = "How you doin?"

function HelloWorld($name, $message) {
  return "Hello World, my name is $name. $message"
}

HelloWorld $myName $myMessage

It feels very comfortable… like Visual Studio. I see IntelliSense, my theme works, and I can see highlighting. I can set breakpoints, step in/over, see locals, watches, call stack, console output… feeling good, because it's doing what it said it can do and scripting PowerShell now feels a little like coding C#.

REPL Window

What about the REPL window? After a little searching, I found it tucked away on the menu: View > Other Windows > PowerShell Interactive Window. You can also get to it with Ctrl + Shift + \. I threw some quick scripts at it… ✓, it works too.

Unit Testing

The last thing I have time for is trying out unit testing. First, I install Pester in the solution. Luckily there's a NuGet package for that.

>Install-Package Pester

Then I create a simple test script file to test my Hello World script.

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".tests.", ".")
. "$here\$sut"

Describe "HelloWorld" {
 It "returns correct message" {
   HelloWorld "Charles Bryant" "How you doin?" | Should Be "Hello World, my name is Charles Bryant. How you doin?"
 }
}

Houston, there's a problem. When I open the Test Explorer I can see a bunch of tests that come with Pester, but I don't see my little test. I try to reorganize the tests in the explorer and it freezes. Not sure if this is a problem with PVST, Pester, NuGet, Visual Studio, or user error… oh well. I can't say it is a problem with PVST because I didn't dig into what was wrong (I still have work to do for my day job).
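
If I do come back to it, the first thing to try is probably running the test straight from the PowerShell console to rule out the test file itself. Something like this, assuming the test file is named HelloWorld.tests.ps1 to match the $sut convention above:

# Run the Pester test file directly, outside of Test Explorer
Invoke-Pester .\HelloWorld.tests.ps1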

Conclusion

OK, unit testing isn't as intuitive as the other operations, hence the "Almost" in the title. It will feel complete when I get unit testing working for me, but nonetheless, I like this tool a lot so far. I will definitely be watching it, and if I see something within my skills that I can contribute, I will pitch in, as this is something I can definitely use.

Stupid Dev Trick #1

Someone on my team needed to create a ton of SQL insert statements to set up some test data. We actually have a tool that does this, but he needed to pull data from a complex query with multiple joins and other nasty stuff. The insert statement he needed only had 3 columns, but he had to insert a lot of data and didn't want to write the insert scripts manually.

Stupid Dev Trick #1 to the Rescue

I showed him the super special column combinatorial power of a spreadsheet copy and paste. With this he was able to create the insert statements in less time than it took to write the query to get the data.

Here is the gist of the trick. Open a spreadsheet and, in the first column, write the first half of the insert statement you want to produce, up to the point where you need to add your data:

| INSERT INTO Table1 (Column1, Column2) VALUES ( |

Then run your query and copy the first column of data into Column 2.

| Datavalue 1 |
| Datavalue 2 |
| Datavalue 3 |
| remaining...|

Next, add a comma in Column 3 next to each data value (use copy and paste), then do the same for the remaining values from your query results. Finally, in the last column, close the insert statement.

| INSERT INTO Table1 (Column1... | Datavalue 1 | ,        | Other value 1 | )        |
| INSERT INTO Table1 (Column1... | Datavalue 2 | ,        | Other value 2 | )        |
| INSERT INTO Table1 (Column1... | Datavalue 3 | ,        | Other value 3 | )        |
| INSERT INTO Table1 (Column1... | remaining...| ,        | remaining.... | )        |

Next, copy the insert statement you wrote in Column 1 to each row. Then you can just copy all of the columns to a text editor, or SQL Server Management Studio in my case, and the column separators will magically turn into whitespace, leaving statements ready to run.

INSERT INTO Table1 (Column1, Column2) VALUES ( Datavalue 1, Other value 1 )
INSERT INTO Table1 (Column1, Column2) VALUES ( Datavalue 2, Other value 2 )
...
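
For what it's worth, the scripted version isn't much work either once the query results are sitting in a file. A quick PowerShell sketch with made-up file and column names:

# Build INSERT statements from an exported result set (file and column names are hypothetical)
Import-Csv -Path .\queryResults.csv | ForEach-Object {
    "INSERT INTO Table1 (Column1, Column2) VALUES ( $($_.Column1), $($_.Column2) )"
} | Set-Content -Path .\inserts.sql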

Conclusion

I have used this stupid trick and variations thereof to save a ton of time producing one-off solutions. Yes, it would be better to create a script or update our test data extraction tool to handle this, but under time pressure, and not wanting to solve edge cases, this stupid trick is actually smart when you can't squeeze another brain cell for a quick idea and you have nothing else up your sleeve.

Everyone’s a Risk Analyst

I watched a video about software security and it had me thinking about risk. So, I thought I would write a quick blog post about some of my thoughts. This is a personal opinion post and rant about team responsibility in revealing risk.

Revealing Risks

As a member of a software delivery team, one of my many responsibilities is to reveal risk in the application before it's released. I wasn't specifically asked to reveal risk. Actually, as part of my current position I was asked to write automated tests that prove the application works as specified by the business. Well, I do that, but the business really wants to know the risk in shipping a release. If we ship, will it work, will there be profit-sucking bugs, reputation-destroying issues…? Can we trust that we can push the big red deploy button without the release blowing up and hurting instead of helping the business and our customers?

I understand that I cannot reveal all risk, but I try to reveal the most damaging risks. There is no way we can uncover every risk, and we can never be 100% certain that we are risk-free, but there is value in every team member searching for risk, revealing risks, verifying the most damaging risks are mitigated, and providing information to the team to evaluate the risk of a release.

Even though I write automated tests, I cannot in good faith rely solely upon automated tests to reveal risks. In order to write automated tests I have to know how to run scenarios manually. If I am going to run scenarios manually, I should also explore the application for risks outside of the specification checks I am automating. My true value to the business is realized when I am able to explore the application and observe its behavior with my own eyes in order to identify risks not covered by requirements and specifications. Automation is good, but it can only catch known risks. So, I manually test the application, and manual testing will never go away no matter how much automated coverage I achieve.

Developers

If I am working as a feature developer, I have the most responsibility for catching risks. If my code doesn't work, I have to fix it. So, why not invest time to make sure the code works before I send it down the pipeline? If I have to wait for feedback from someone else down the delivery pipeline, it becomes harder to switch context and remember the details of the change. Testing as I code gives me the fastest feedback and the least amount of context switching.

Also, it's cheaper to fix an issue as I am developing it than later, after someone else has spent time testing it. So, having testing as part of my development workflow reduces cost. I further reduce the cost of software delivery by automating the tests that I use to check my work. Then it becomes faster to rerun the tests, and the tests can be leveraged in an automated build to check for regressions.

I am ready to commit only after I have manually explored the effect of my change; identified, triaged, and mitigated high-priority risks; and automated my specification checks. When I commit, I am saying that I have exercised due diligence in revealing risks in the code I deliver.

Business Analyst

As a Business Analyst, I can prevent risks by not introducing them in specifications. I can reduce risks by involving users, developers, QA, and business stakeholders early in my analysis of changes to the application. Even though I provide awesome specifications, I am engaged during development and available for testing at all points in the SDLC. My specifications are only as good as my understanding of the application, and understanding comes from usage and from interrogating others who understand the application and how it will be used. If I am going to explore the application to help write specifications, I am going to explore the application outside of the specifications to help my team uncover hidden risks.

Quality Analyst

As a Quality Analyst, I am the last line of defense before a release is given to our users. Even though quality is in my title, I am not solely responsible for quality, since quality is a large component of the risk analysis my team does. I reduce risks with my talent for exposing quality-related risks. If I didn't have to deal with shortcomings in specifications and the application's development, I could take more time to freely explore the system and uncover risks. Many times my testing amounts to activities that could have been automated, like specification checks and regression tests. I am the professional risk analyst on the team, but because quality is in my title, on many teams I have been reduced to a human checker instead of a professional risk analyst.

Automation Engineer

I am a developer who recently switched my title to automation engineer. I am really a QA that writes a lot of code. I have always had an extremely high regard for QA. As an independent contractor, on many of my contracts I didn't have the luxury of a QA, and it hurt. When I first worked with a good QA, my development world changed. I learned how to test my work just by watching them and reading their bug reports. I'd say that my time spent with a couple of these world-class QAs was worth more than anything I have learned from other developers. Now that I see firsthand a little of what they do, I have even more respect.

When I identify new risks, it is up to the product team to categorize them as true risks, bugs, defects… or as something we can ignore. My value to the team is not identifying bugs or rejecting tickets, but providing information on the risks I have identified in the application. If I have done my work, I will also provide supporting evidence that helps others observe the risks I identified.

If you hold QA accountable for bugs they did not add to the system, you don't understand the role of QA. If bugs escape to production, it is not QA's fault, it is the team's fault. You can't place blame on one person or role. Everyone, Dev, BA, QA, Product Manager, and so on, should be included in the pre-release hunt for bugs. If a bug gets by the team, it is the team's fault.

Conclusion

I have come to the conclusion that I need to reveal risk from my experience as an entrepreneur and developer. I strengthened my belief in this idea by studying "Titans of Test" like James Whittaker, Michael Bolton, Cem Kaner, and James Bach. If you are on a software delivery team, you are a risk analyst. In agile teams everyone is considered a developer. The titles are gone and the team shares in all responsibilities. There may be people that specialize in certain activities like writing code, specs, or tests, but everyone should be involved in all aspects of delivering the product. There may be people on your team with test or quality in their title or job description, but everyone on the team is responsible for the risk, and therefore the quality, of the application. So, if you are involved in software delivery, get in touch with your inner tester and explore your application, because quality is a team sport and you are a risk analyst.

Mars Drone

I am proud to announce that I will finally be launching my Mars Drone today. I have been building this project for 5 years. The drone is very small, like a drone you would buy in a local store. It is heavily shielded and runs on a proprietary propulsion and energy system. The drone passed the final test of its guidance and communications systems last week when it returned successfully from a lunar orbit. If all goes well, I will be able to publish pictures of Mars in about nine months.

Here’s to dreaming!

Rethrowing More Expressive Exceptions in C#

This post was contributed by Jonathan Hamm, one of the developer gurus I have the privilege of working with at my day job.

I did not realize this behavior of rethrown exceptions existed: information can be added to the exception in one "catch" block, and in subsequent "catch" blocks the added data will remain. It does make sense now with this test.

The innermost method throws an exception, the next level adds an element to the Data property and rethrows, and then the main level catches the exception that has the additional Data entry.

LINQPad>

void Main()
{
      try
      {            
            LogAndRethrow();
      }
      catch (Exception ex)
      {
            ex.Data.Dump();
      }
}

void LogAndRethrow()
{
      try
      {
            CreateException();
      }
      catch (Exception ex)
      {
            ex.Data.Add("caught and rethrown", true);
            throw;
      }
}

void CreateException()
{
      throw new NotImplementedException();
}

Jon used LINQPad to explore this feature and run the code above. Actually, he does a lot of amazing things with LINQPad; you should definitely give this tool a try if you haven't already. Speaking of LINQPad, did you know you can run LINQPad scripts from the command line with lprun? Something to think about if you are looking to use your C# skills for continuous delivery automation.
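
If that sounds interesting, running a saved query from a build step can be as simple as this (the script name is a made-up placeholder):

# Execute a LINQPad script from the command line, e.g. from a PowerShell build step
lprun MyBuildTask.linq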