Quick Testing Legacy ASP.net Web Services
If you still have legacy ASP.net web services, the old asmx file variety, and you need to do a quick test from a server that doesn’t have fancy testing tools, this article provides an easy way to test the service with just a browser and an HTML file.
Test Service GET Method
To test the service’s GET methods you can use a browser and a specially formatted URL.
http://domain/service.asmx/method?parameter=value
For example, I have
- a domain, legacywebservices.com
- it hosts a service, oldservice.asmx
- that has a GET method, GetOldData
- that accepts parameters, ID and Name
The URL to test this web service method would be
http://legacywebservices.com/oldservice.asmx/GetOldData?ID=1000&Name=Some%20Old%20Data
(The space in the Name value is URL encoded as %20.)
This would return an XML file containing the response from the service or an error to troubleshoot.
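If PowerShell happens to be on the server, the same GET check also works from the console. A minimal sketch using the example URL above (no error handling, just dump the raw response):
Invoke-WebRequest "http://legacywebservices.com/oldservice.asmx/GetOldData?ID=1000&Name=Some%20Old%20Data" | Select-Object -ExpandProperty Content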
Test Service POST Method
To test the service’s POST methods you can use a simple HTML file containing a form. Just open the form in your browser, enter the values, and submit.
<form method="POST" action="http://domain/service.asmx/method">
  <div><input type="text" name="parameter" /></div>
  <div><input type="submit" value="method" /></div>
</form>
For example, I have
- a domain, legacywebservices.com
- it hosts a service, oldservice.asmx
- that has a Post method, SaveOldData
- that accepts parameters, ID and Name
The HTML form to test this web service method would be
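<form method="POST" action="http://legacywebservices.com/oldservice.asmx/SaveOldData">
  <div><input type="text" name="ID" /></div>
  <div><input type="text" name="Name" /></div>
  <div><input type="submit" value="SaveOldData" /></div>
</form>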
This would return an XML file containing the response from the service or an error to troubleshoot.
Troubleshoot
If you get a System.Net.WebException error message indicating that the request format is unrecognized, you need to add a bit of configuration to get it to work, as explained in this KB. Just add this to the system.web node in the web.config of the web service and you should be good to go.
<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>
Conclusion
If you are sentenced to maintaining and testing legacy ASP.net web services, these simple tests can help uncover pesky connectivity, data and other issues that don’t return proper exceptions or errors because your app is old and dumb (even if you wrote it).
PowerShell in Visual Studio, Finally, At Last… Almost
Even if you don’t like Microsoft or .Net, you have to admit that Visual Studio is a boss IDE. After being thrust into the world of scripting and PowerShell, I was disappointed to find the PowerShell support in Visual Studio lacking. Well, today I received a notice that Microsoft joined Adam Driscoll’s open source project, PowerShell Visual Studio Tools (PVST). They announced a release of a new version and I am ready to give it another go.
Adam notes that Microsoft submitted a large pull request full of bug fixes and features. This project provides pretty nice PowerShell support inside my favorite IDE, including:
- Edit, run and debug PowerShell scripts locally and remotely using the Visual Studio debugger
- Create projects for PowerShell scripts and modules
- Leverage Visual Studio’s locals, watch, call stack for your scripts and modules
- Use the PowerShell interactive REPL window to execute PowerShell scripts and commands right from Visual Studio
- Automated Testing support using Pester
From https://visualstudiogallery.msdn.microsoft.com/c9eb3ba8-0c59-4944-9a62-6eee37294597
You can download it for free from the Visual Studio Gallery. A quick double-click install of the .vsix file you download and you’re ready.
My first test was to create a PowerShell project. In the Visual Studio New Project window there’s a new project template type, PowerShell. Inside of it are two templates: PowerShell Module Project and PowerShell Script Project.
Scripting and Debugging
I start with a script project and bang out a quick Hello World script to see debugging in action.
$myName = "Charles Bryant"
$myMessage = "How you doin?"
function HelloWorld($name, $message) {
    return "Hello World, my name is $name. $message"
}
HelloWorld $myName $myMessage
It feels very comfortable… like Visual Studio. I see IntelliSense, my theme works and I can see highlighting. I can set breakpoints, step in/over, see locals, watches, call stack, console output… feeling good because it’s doing what it said it can do, and scripting PowerShell now feels a little like coding C#.
REPL Window
What about the REPL window? After a little searching, I found it tucked away on the menu: View > Other Windows > PowerShell Interactive Window. You can also get to it with Ctrl + Shift + \. I threw some quick scripts at it… ✓, it works too.
Unit Testing
The last thing I have time for is unit testing. First, I install Pester on the solution. Luckily there’s a NuGet package for that.
>Install-Package Pester
Then I create a simple test script file to test my Hello World script.
$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".tests.", ".")
. "$here\$sut"
Describe "HelloWorld" {
It "returns correct message" {
HelloWorld "Charles Bryant" "How you doin?" | Should Be "Hello World, my name is Charles Bryant. How you doin?"
}
}
Houston, there’s a problem. When I open the Test Explorer I can see a bunch of tests that come with Pester, but I don’t see my little test. I try to reorganize the tests in the explorer and it freezes. I’m not sure if this is a problem with PVST, Pester, NuGet, Visual Studio, or user error… oh well. I can’t say it is a problem with PVST because I didn’t try to find out what was wrong (I still have work to do for my day job).
Conclusion
OK, unit testing isn’t as intuitive as the other operations, hence the “Almost” in the title. It will feel complete when I get unit testing working for me, but nonetheless, I like this tool a lot so far. I will definitely be watching it, and if I see something up to my skills that I can contribute, I will pitch in because this is something I can definitely use.
Everyone’s a Risk Analyst
I watched a video about software security and it had me thinking about risk. So, I thought I would write a quick blog post about some of my thoughts. This is a personal opinion post and rant about team responsibility in revealing risk.
Revealing Risks
As a member of a software delivery team, one of my many responsibilities is to reveal risk in the application before it’s released. I wasn’t specifically asked to reveal risk. Actually, as part of my current position I was asked to write automated tests that prove the application works as specified by the business. Well, I do that, but the business really wants to know the risk in shipping a release. If we ship, will it work, will there be profit-sucking bugs, reputation-destroying issues…? Can we trust that we can push the big red deploy button without the release blowing up and hurting instead of helping the business and our customers?
I understand that I cannot reveal all risk, but I try to reveal the most damaging risks. There is no way we can uncover all risks, and we can never be 100% certain that we are risk free, but there is value in every team member searching for risk, revealing risks, verifying the most damaging risks are mitigated, and providing information to the team to evaluate the risk of a release.
Even though I write automated tests, I cannot in good faith rely solely upon automated tests to reveal risks. In order to write automated tests I have to know how to run scenarios manually. If I am going to run scenarios manually, I should also explore the application for risks outside of the specification checks I am automating. My true value to the business is realized when I am able to explore the application and observe its behavior with my own eyes in order to identify risks not covered by requirements and specifications. Automation is good, but it can only catch known risks. So, I manually test the application, and manual testing will never go away no matter how much automated coverage I achieve.
Developers
If I am working as a feature developer, I have the most responsibility for catching risks. If my code doesn’t work, I have to fix it. So, why not invest time to make sure the code works before I send it down the pipeline? If I have to wait for feedback from someone else down the delivery pipeline, it becomes harder to switch context and remember the details of the change. Testing as I code gives me the fastest feedback and the least amount of context switching.
Also, it’s cheaper to fix an issue as I am developing it than later, after someone else has spent time testing it. So, having testing as part of my development workflow reduces cost. I further reduce the cost of software delivery by automating the tests that I use to check my work. Then it becomes faster to rerun the tests, and the tests can be leveraged in an automated build to check for regressions.
When I believe I am ready to commit, after I have manually explored the effect of my change and have identified, triaged, and mitigated high priority risks, after I have automated my specification checks, then I can commit my work. When I commit I am saying that I have exercised due diligence in revealing risks in the code I deliver.
Business Analyst
As a Business Analyst, I can prevent risks by not introducing them in specifications. I can reduce risks by involving users, developers, QA, and business stakeholders early in my analysis of changes to the application. Even though I provide awesome specifications, I am engaged during development and available for testing at all points in the SDLC. My specifications are only as good as I understand the application and understanding comes from usage and interrogation of others that understand the application and how it will be used. If I am going to explore the application to help write specifications, I am going to explore the application outside of the specifications to help my team uncover hidden risks.
Quality Analyst
As a Quality Analyst, I am the last line of defense before a release is given to our users. Even though quality is in my title, I am not solely responsible for quality since quality is a large component of the risk analysis my team does. I reduce risks with my talent for exposing quality related risks. If I didn’t have to deal with shortcomings in specifications and the application’s development, I could take more time to freely explore the system and uncover risks. Many times my testing amounts to activities that could have been automated like specification checks and regression tests. I am the professional risk analyst on the team, but because quality is in my title, on many teams I have been reduced to a human checker instead of a professional risk analyst.
Automation Engineer
I am a developer who recently switched my title to automation engineer. I am really a QA that writes a lot of code. I have always had an extremely high regard for QA. As an independent contractor, on many of my contracts I didn’t have the luxury of a QA and it hurt. When I first worked with a good QA, my development world changed. I learned how to test my work just by watching them and reading their bug reports. I’d say that my time spent with a couple of these world class QAs was worth more than anything I have learned from other developers. Now that I see first hand a little of what they do, I have even more respect.
When I identify new risks, it is up to the product team to categorize them as true risks, bugs, defects… or as something we can ignore. My value to the team is not identifying bugs or rejecting tickets, but providing information on the risks I have identified in the application. If I have done my work, I will also provide supporting evidence that helps others observe the risks I identified.
If you hold QA accountable for bugs they did not add to the system, you don’t understand the role of QA. If bugs escape to production, it is not QA’s fault, it is the team’s fault. You can’t place blame on one person or role. Everyone (Dev, BA, QA, Product Manager, etc.) should be included in the pre-release hunt for bugs. If a bug gets by the team, it is the team’s fault.
Conclusion
I have come to the conclusion that I need to reveal risk from my experience as an entrepreneur and developer. I strengthened my belief in this idea by studying “Titans of Test” like James Whittaker, Michael Bolton, Cem Kaner, and James Bach. If you are on a software delivery team, you are a risk analyst. In agile teams everyone is considered a developer. The titles are gone and the team shares in all responsibilities. There may be people that specialize in certain activities like writing code, specs or tests, but everyone should be involved in all aspects of delivering the product. There may be people on your team with test or quality in their title or job description, but everyone on the team is responsible for the risk, and therefore the quality, of the application. So, if you are involved in software delivery, get in touch with your inner tester and explore your application, because quality is a team sport and you are a risk analyst.
Rethrowing More Expressive Exceptions in C#
This post was contributed by Jonathan Hamm, one of the developer gurus I have the privilege of working with at my day job.
I did not realize this rethrowing behavior existed: information added to an exception in one catch block remains on the exception when it is rethrown and caught again in later catch blocks. It makes sense now with this test.
The innermost method throws an exception, the next level adds an element to the exception’s Data property and rethrows it, then the top level catches the exception carrying the additional Data entry. The following was run as a LINQPad script.
void Main()
{
    try
    {
        LogAndRethrow();
    }
    catch (Exception ex)
    {
        // The Data entry added in LogAndRethrow is still here.
        ex.Data.Dump();
    }
}

void LogAndRethrow()
{
    try
    {
        CreateException();
    }
    catch (Exception ex)
    {
        // Annotate the exception, then rethrow without losing the stack trace.
        ex.Data.Add("caught and rethrown", true);
        throw;
    }
}

void CreateException()
{
    throw new NotImplementedException();
}
Jon used LINQPad to explore this feature and run the code above. Actually, he does a lot of amazing things with LINQPad; you should definitely give this tool a try if you haven’t already. Speaking of LINQPad, did you know you can run LINQPad scripts from the command line with lprun? Something to think about if you are looking to use your C# skills for continuous delivery automation.
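For example, a one-liner like this could slot into a build or deployment script (a rough sketch, assuming lprun.exe is on your PATH and the saved query takes no arguments):
lprun .\MyDeploymentCheck.linq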
Ctrl-S Rapid Feedback Loop
Kent Beck, inventor of Extreme Programming and TDD guru, did a short video on how he went about learning CoffeeScript. The beauty of what he described didn’t have much to do with what he learned, but how. He based the video on his “Making Making Manifesto” and some examples of Making Making that inspired him.
What he did was create a quick little test framework that gave him instant feedback on the quality of his code every time he saved his code (Ctrl-S on Windows). This in effect gave him feedback on not only the code he was writing, but his making making thought process while learning CoffeeScript all at the same time.
I have seen rapid test feedback with MightyMoose for .Net, but that is slow in comparison to what he was able to achieve. It helps that JavaScript, even with CoffeeScript in the middle, doesn’t have a heavy compilation step since it is an interpreted language. I have also seen the benefits of file watchers when working with SASS and LESS for CSS to speed up feedback loops in UI development. I have played with rapid feedback on HTML changes in the Chrome Developer Tools (very fast). Yet, the context of using it to learn a new language never dawned on me. I have used numerous scripting sites, like Codecademy, to learn the basics of Perl, Ruby and others by following a set guide to learning the language. I have never seen it done like this with such ease, expressiveness, and ability to experiment and wander while maintaining a constant sense that you are on the right path.
Anyway, with my intense, somewhat obsessive, focus on improving feedback loops in software delivery, this was a great example of how automation can help increase efficiency. I wish we could do this in Visual Studio with similar speed.
- A test window to write tests for new code or code changes I want to write.
- A code window to write the new code or code changes.
- A result window to view instant results of the tests after saving either the test or code window.
Does a solution with similar speed to Kent’s example, but for C#, reside somewhere in Roslyn? Maybe. It’s possible that MightyMoose is the answer and is faster than when I first tried it years back. Will I find time to explore it? Probably not, but I would really like to.
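Outside of Visual Studio, you can get surprisingly close to Kent’s Ctrl-S loop with a little PowerShell. This is only a rough sketch (the watched folder is a placeholder and Pester is assumed to be installed): every save of a .ps1 file re-runs the test suite.
$watcher = New-Object System.IO.FileSystemWatcher "C:\src\MyModule", "*.ps1"
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents = $true
# Re-run the tests every time a script file is saved (Ctrl-S).
Register-ObjectEvent $watcher Changed -Action {
    Invoke-Pester -Path "C:\src\MyModule"
} | Out-Null
Write-Host "Watching for changes. Ctrl+C to stop."
while ($true) { Start-Sleep -Seconds 1 }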
Related links:
- Making Making Coffee
- .Net Continuous Test
- Chrome Developer Tools Live Editing
IIS 8 Configuration File
Note to self
The IIS 8 configuration file is located in %windir%\System32\inetsrv\config\applicationHost.config. It is just an XML file and the schema is well known. You can open it, edit it (if you are brave), and otherwise do configuration stuff with it. You can diff it from system to system to find inconsistencies or save it in a source code repository to standardize on a base configuration across web server nodes, if your project needs that kind of thing. Lastly, you can manage it with Powershell… you can manage it with Powershell… you can manage it with Powershell DSC!
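For instance, here is a rough sketch of the diffing idea in PowerShell (the server names and the reliance on admin shares are assumptions; adjust for your environment):
$configPath = 'C$\Windows\System32\inetsrv\config\applicationHost.config'
$left = Get-Content "\\web01\$configPath"
$right = Get-Content "\\web02\$configPath"
# Any output means the two servers' IIS configurations have drifted apart.
Compare-Object $left $right | Format-Table -AutoSize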
The possibilities are endless so stop depending so much on the IIS Server Manager UI like you are in Dev preschool. You are a big boy now, remove the training wheels, but you might want to wear a helmet.
I don’t want to have this discussion again!
Blameless RCA
Let ye without failure cast the first stone.
I am involved in a workgroup at work that is exploring Root Cause Analysis in the hopes that we can come up with a way to help everyone improve their RCA process and procedures.
I believe it is important in our RCA recommendations to strive to build a culture around RCA. To borrow from a theme brought up by a workgroup member, culture building should be extended to retrospectives and all of our continuous improvement processes in general.
Just Culture
For RCA to be most effective we should instill the idea of the “blameless postmortem” into how we envision RCA. The blameless postmortem is an awesome concept, part of a culture around failure called a “Just Culture,” that was introduced to me in a blog post by John Allspaw, Web Operations guru at Etsy. It’s a way to encourage team members to own their failures without fear, in the hopes that a less hostile environment towards failure will encourage fast, detailed feedback during active issue resolution and postmortems. We want team members to volunteer to report an issue as soon as they see it or cause it.
Owning Failure
In terms of RCA, this boils down to instilling the idea that finding who’s at fault, what team missed this or that, is not important. The only things that matter are how, when, and why an issue leaked; “who” is not under investigation. Granted, who is at fault will most likely come out, and it should, but there should be no condemnation or negative side effect to owning a failure. We want “who” to come from failure owners themselves, not a lot of intricate detective work. We want the team to freely offer the actions that may have contributed to a failure, in hopes that we can compile a timeline of multiple narratives of the failure from various perspectives. When we can freely own failure without retribution we are more apt to own up to a failure and share the details that led to it so that it can be corrected.
Remove Managerial Blockages on RCA
There are managers that want to know who to blame so that they can monitor who is causing issues. If there is a problem with someone continuously failing, it will be evident without having to expose personal failures in the RCA process, formally or as a part of team culture. Root cause is usually deeper than one person or team’s failure. There are usually multiple stories that contribute to failures. There are managers that use hindsight to amplify the negative effect of failure to try to shame someone into being better. Highlighting what should have been done is not helpful because it doesn’t lead to change. Oftentimes hindsight is disguised as a solution without ever understanding why the actions were taken that caused the failure, or even how the manager’s mismanagement may have contributed to the failure. I only add this because I have seen many RCAs or postmortems fail because of a manager trying to place blame and using their limited hindsight to declare the problem solved.
And More
There is a lot of good that comes from a Just Culture. Since I saw some things in the RCA practices at work that may lead to the blame game, I thought that a blameless postmortem should be explicitly built into our RCA process in the hopes that it affects the culture. Just something to think about if you are going down this same road.
Orphaned Powershell PSDrive
I received this strange error while executing a script that creates a new PSDrive.
New-PSDrive : The local device name has a remembered connection to another network resource
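For context, the script maps a network share to a drive letter with something roughly like this (the exact command is a sketch; -Persist is what creates the remembered Windows mapping):
New-PSDrive -Name S -PSProvider FileSystem -Root \\node1\d$ -Persist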
I tried to use Remove-PSDrive, but
Remove-PSDrive : Cannot find drive. A drive with the name 'S:' does not exist.
I was able to fix this issue with the “net use” command.
First, I ran it to see if the drive was still mapped. I am still unsure how it persisted between PowerShell sessions; I must have missed something.
PS C:\> net use
New connections will be remembered.
Status       Local     Remote           Network
------------------------------------------------------------------------------
Unavailable  S:        \\node1\d$       Microsoft Windows Network
Unavailable  I:        \\node2\it       Microsoft Windows Network
OK           P:        \\public         Microsoft Windows Network
The command completed successfully.
Then I ran “net use” with the delete parameter to remove the orphaned drive.
PS C:\> net use /delete S:
S: was deleted successfully.
I love it when a plan comes together.
What is this CIM I keep running into in Powershell?
I keep having to use CIM in my scripts, but what is it? I understand how to use it, but where did it come from and what does it stand for? Like every developer I know, I reached for a search engine to solve this mystery.
There is an industry standards organization called the DMTF (Distributed Management Task Force) that defined a standard named the Common Information Model. By the way, this is the same group that defined MOF (Managed Object Format), the standard under the covers of DSC. The CIM schema is described using MOF, and CIM itself is a cross-platform common definition of management information for systems, networks, applications and services that allows for vendor extensions. How was that for acronym soup?
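In PowerShell terms, CIM mostly shows up through the CIM cmdlets. A tiny example querying a standard CIM class (nothing here is specific to my scripts):
Get-CimInstance -ClassName Win32_OperatingSystem | Select-Object Caption, Version, LastBootUpTime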
Update PSModulePath for Custom PowerShell Module Development
I am in the process of a deep dive into DSC and I want to store my custom modules and DSC Resources in source control. To run PowerShell modules easily, you have to import them or have them on the PSModulePath environment variable. Since I don’t want to point a source repository at the default PowerShell module path, I want to add my custom module path to PSModulePath. This will save me some time importing modules and picking up module changes. It also means I will always be running the most recent version of my modules, even the buggy ones, so if you do this, understand the implications.
It’s actually pretty easy to automate this with PowerShell. Since I already have some experience updating environment variables with PowerShell I just created a new script to add my custom module path to PSModulePath.
$currentModulePath = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")
$customModulePath = "C:\_DSC\DSCResources"
$newModulePath = $currentModulePath + ";" + $customModulePath
[Environment]::SetEnvironmentVariable("PSModulePath", $newModulePath, "Machine")
I made this script a bit more verbose than it needs to be so it is more self-evident what is happening (code as documentation; no comments necessary).
I can envision someone also needing to remove a path from PSModulePath. This is enough to get me started, so I will leave a polished version of that up to you until I have a need for it myself :), but a rough starting point is sketched below.
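An untested sketch of the reverse operation (same machine-level scope and the same custom path as above):
$currentModulePath = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")
$customModulePath = "C:\_DSC\DSCResources"
# Drop the custom path and stitch the rest back together.
$newModulePath = ($currentModulePath -split ';' | Where-Object { $_ -ne $customModulePath }) -join ';'
[Environment]::SetEnvironmentVariable("PSModulePath", $newModulePath, "Machine")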
UPDATES
When I ran this script in an Invoke-Command on a remote session, modules in the new path weren’t immediately available when I tried to use them. This is because the path is not updated on the existing remote session. A quick workaround for me was to remove the session and recreate it.
Get-PSSession | Remove-PSSession;
This removes all sessions so you may not want to do this. Since I don’t care about sessions I like it. This was just a one line change in my workflow script and it didn’t cause too much latency in the script execution. I know there are some other solutions that involve messing with the registry, but this is a one time deal so resetting the remote session works for me.
