Category: Pipeline

Monitoring Change Tickets in Delivery Pipelines

DevOps sounds cool, like some covert special operations IT combat team, but many implementations miss the boat because they only focus on the relationship between Dev and Ops and are usually championed only by Ops. The name alienates important contributors on the software delivery team. The team is responsible for software delivery, including analysis, design, development, build, test, deploy, monitoring, and support. The entire team needs to be included in DevOps and needs visibility into delivery pipelines from end to end. That's an unrelated rant, but it led me to thinking about how a delivery team can monitor changes in delivery pipelines.

Monitor Change

I believe it is important that the entire team be able to monitor changes as they flow through delivery pipelines. There are ticket management systems that help capture some of the stages a change goes through, but those are mostly project-management workflow stages and they have to be changed manually. I'd like a way to automatically monitor a change as it flows from change request all the way to production, including actions that take place outside of the ticket or project management system.

Normally, change is captured in some type of ticket, maybe in a project management system or bug database (e.g., Jira, Bugzilla). We should be able to track the various activities that take place as tickets make their way to production, and we need a way to trace those actions back to the change request ticket. I'd like a system where the activities involved in getting a ticket to production automatically generate events that are related to ticket numbers and stored in a central repository.

If a ticket is created in Jira, a ticket created event is generated. If a developer logs time on a ticket, a time logged activity event is generated that links back to the time log, or maybe holds data from the time log for that ticket number.

When an automated build that includes the ticket happens, a build started activity event is triggered with the build data. As various jobs and tasks run in the automated build, a build changed activity event is triggered with log data for the activity. When the build completes, a build finished activity event is triggered. There may be more than one ticket involved in a build, so there would be multiple events with similar data captured, but hopefully changes are small and constrained to one or a few tickets… that's the goal, right: small batches, failing fast and early.

We may want to capture the build events and include every ticket involved instead of relating the event directly to a ticket; not sure, I am brainstorming here. The point is I want full traceability across my software delivery pipelines from change request to production, and I'd like these events stored in a distributed event store that I can project reports from. Does this already exist? Who knows, but I felt like thinking about it a little before I search for it.

Ticket Events

  1. Ticket Created Event
  2. Ticket Activity Event
  3. Ticket Completed Event

A ticket event will always include the ticket number and a timestamp for the event (think Event Sourcing). Ticket created occurs after the ticket is created in the ticket system. Ticket completed occurs once the ticket is closed in the ticket system. Ticket activities are captured based on the activities that are configured in the event system.
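A rough C# sketch of what the base ticket events might look like (type and property names here are placeholders I'm making up, not an existing system):

using System;

// Every event carries the ticket number and a timestamp (Event Sourcing style).
public abstract class TicketEvent
{
    public string TicketId { get; set; }      // e.g., the Jira key for the change request
    public DateTime Timestamp { get; set; }   // when the event occurred
}

public class TicketCreatedEvent : TicketEvent { }

public class TicketCompletedEvent : TicketEvent { }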

Ticket Activity Events

A ticket activity is an action that occurs on a change request ticket as it makes its way to production. Each ticket activity will have an event for started, changed, and finished. Ticket activity events can include data relevant to the particular type of activity. There may also be other statuses included in each of these events. For example, a finished event could include a status of error or failed to indicate that the activity finished but had an error or failed.

  • {Ticket Activity} Started
  • {Ticket Activity} Changed
  • {Ticket Activity} Finished

For example: Deploy Started carries the deploy log, Build Finished carries the build log, and Test Changed carries new test results from an ongoing test run.

Maybe this is overkill? Maybe this should be simplified so we only need one activity event per activity that includes data for started, changed, finished, and other statuses like error and failed. I guess it depends on whether we want to stream activity event statuses or ship them in bulk when an activity completes; again, I'm brainstorming.
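To make the simplified option concrete, here is a rough sketch of a single activity event that carries a status (again, placeholder names, not an existing system):

using System;

// One event type per activity update; the status distinguishes started, changed,
// finished, and failure cases instead of separate event types.
public enum ActivityStatus { Started, Changed, Finished, Error, Failed }

public class TicketActivityEvent
{
    public string TicketId { get; set; }
    public DateTime Timestamp { get; set; }
    public string Activity { get; set; }        // e.g., "Build", "Deploy", "Test"
    public ActivityStatus Status { get; set; }
    public string Data { get; set; }            // log excerpt, test results, etc.
}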

Activities

Every ticket won’t have ticket activity events triggered for every activity that the system can capture. Tickets may not include every event that can occur on a ticket. Activity events are triggered on a ticket when the ticket matches the scope of the activity. Scope is determined by the delivery team.

Below are some of the types of activities that I could see modeling events for on my project, but the types can differ depending on the team. So, ticket activity events have to be configurable: every team has to be able to add and remove the types of ticket activity events they want to capture.

  1. Analysis
    1. Business Analysis
    2. Design Analysis
      1. User Experience
      2. Architecture
    3. Technical Analysis
      1. Development
      2. DBA
      3. Build
      4. Infrastructure
    4. Risk Analysis
      1. Quality
      2. Security
      3. Legal
  2. Design
  3. Development
  4. Build
  5. Test
    1. Unit
    2. Integration
    3. End-to-end
    4. Performance
    5. Scalability
    6. Load
    7. Stress
  6. Deploy
  7. Monitor
  8. Maintain

Reporting and Dashboards

Once we have the events captured, we can make various projections to create reports and dashboards to monitor and analyze our delivery pipelines. With the ticket event data we can also create reports at other scopes. Say we want to report on a particular sprint or project: with the ticket ID we should be able to gather this and relate other tickets in the same project or sprint. It would take some thought as to whether we would want to capture project and sprint in the event data or leave that until we make the actual projection, but with the ticket ID we can expand our scope of understanding and traceability.
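For example, a lead-time projection could be computed straight from the raw events; here is a minimal sketch (assuming the events have already been loaded from the event store and flattened into tuples):

using System;
using System.Collections.Generic;
using System.Linq;

public static class LeadTimeProjection
{
    // Lead time per ticket: time from the TicketCreated event to the TicketCompleted event.
    public static Dictionary<string, TimeSpan> Project(
        IEnumerable<(string TicketId, string EventType, DateTime Timestamp)> events)
    {
        return events
            .GroupBy(e => e.TicketId)
            .Where(g => g.Any(e => e.EventType == "TicketCreated")
                     && g.Any(e => e.EventType == "TicketCompleted"))
            .ToDictionary(
                g => g.Key,
                g => g.First(e => e.EventType == "TicketCompleted").Timestamp
                   - g.First(e => e.EventType == "TicketCreated").Timestamp);
    }
}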

Conclusion

The main goal of this exploration into a possible application is to find a way to monitor change as it flows through our delivery pipelines. We need a system that can capture the raw data for ticket created and completed events and all of the configured ticket activity events that occur in between. As I look for an existing app, I can refer back to this to see whether it meets what I envisioned or whether there is a need to build it.

Why Would You Have a Go.cd Pipeline with No Materials?

So, I need to run a Go pipeline that isn't dependent on a material, meaning Go's out-of-the-box implementation of a material. What I'm talking about is a dumb hack to get a production deployment working in an environment with stringent security policies. We have two Go servers. One orchestrates pipelines in our pre-production environments. Because of the security concerns, we have a second Go server just to deploy to production. When a package has gone through all of the quality gates in pre-production, it is ready to deploy to production, and the deployable packages are placed in an artifact repository (right now a file share) that is accessible by the production Go server.

When we want to deploy to production, we manually trigger a pipeline on the production Go server that kicks off a job to get the deploy package from the package repository and place it in the production Go server's artifact repository. With the package in the production artifact repository, all of the production deploy agents have access to deploy it. Normally, this "get files from a share" business would be handled by a material set up in Go, but I couldn't find one that could poll a file share, and sticking these binaries in a source repo accessible to both the preproduction and production domains sounded like overkill at the time.

What would be even better, and make me fall in love with Go even more, would be if the two servers could talk to each other; then I could just have the production Go server poll the preprod server to see if it has a new package in some pipeline. I guess I could

  • write a little Java code to do some type of file share repository poller
  • set up Artifactory or NuGet and use the pollers already built for them
  • explore the Go API and see if I could build a service to have the servers talk to each other through the API
  • spend a little more time on Google and Github to see if this has already been solved

Because I'm not fluent in Java, much of this felt like a trip down the rabbit hole, and I was already shaving a yak trying to get this supposedly quick hack up. So, what I did was the simplest thing I could think of to get going.

Anyway, I finally set up a fake git repo and ignored the material I set up to point to it. First, setting up Git: initialize a repo, create a bare repo, clone the initialized repo into the bare repo, and configure the bare repo as a material in Go. Then, on the Stage config there is an option for "Fetch Materials". Go will only do material updates and checkouts if this is checked, so I unchecked it. On the Material config there is an option for "Poll for new changes"; unchecking it stops Go from polling the material for changes. You still have to set a material up, though, and it has to be a valid material. You can't just put "myfakegitrepo" in the URL; I tried… it failed.

So, although you can't get away with not using a material, you can make the material insignificant after it is set up. I hope this doesn't help anyone; if it does, you are probably doing something complex and wrong like me.

Confirming MSBuild Project Dependency Build Behavior

So, we have one giant Visual Studio solution that builds every application project for our product. Our goal is to one day be able to build and deploy each application independently. Today, we need to build one project for x86 and all others as x64. This requirement also provides a reason to explore per-application build and deploy. The x64 projects and the x86 project share some of the same dependencies, which makes this a good test for per-application build. The purpose of this exploration is to determine the best way to automate the separate platform builds and lay groundwork for per-application build. These are just my notes and not meant to provide much meat.

First, some setup to provide a test solution and projects to experiment with. I created 4 projects in a new solution. Each project is a C# class library with only the default files in it.

  • ProjA
  • ProjB
  • ProjC
  • ProjD

Add project dependencies (dependent > dependencies)

  • ProjA > ProjB
  • ProjB > ProjC, ProjD
  • ProjC > ProjD

Set the platform target (project properties build tab) for each project like so

  • ProjA x64
  • ProjB x64
  • ProjC x64
  • ProjD x86

Behavior when a Dependent Project is Built

Do a Debug build of the solution and inspect the bin folders.

  • BuildTest\ProjA\bin\Debug
    • ProjA.dll               10:03 AM             4KB        B634F390-949F-4809-B937-66069C5F058E              v4.0.30319 / x64
    • ProjA.pdb             10:03 AM             8KB
    • ProjB.dll                10:03 AM             4KB        B0C5B475-576D-44D2-BD41-135BDA69225E          v4.0.30319 / x64
    • ProjB.pdb             10:03 AM             8KB
  • BuildTest\ProjB\bin\Debug
    • ProjB.dll               10:03 AM             4KB        B0C5B475-576D-44D2-BD41-135BDA69225E          v4.0.30319 / x64
    • ProjB.pdb            10:03 AM             8KB
    • ProjC.dll               10:03 AM             4KB        DBB9482F-6609-4CA5-AB00-009473E27CDA          v4.0.30319 / x64
    • ProjC.pdb            10:03 AM             8KB
    • ProjD.dll               10:03 AM             4KB        4F0F7877-5046-4A32-8B8E-FAD8E2660CE6            v4.0.30319 / x86
    • ProjD.pdb            10:03 AM             8KB
  • BuildTest\ProjC\bin\Debug
    • ProjC.dll               10:03 AM             4KB        DBB9482F-6609-4CA5-AB00-009473E27CDA          v4.0.30319 / x64
    • ProjC.pdb            10:03 AM             8KB
    • ProjD.dll               10:03 AM             4KB        4F0F7877-5046-4A32-8B8E-FAD8E2660CE6            v4.0.30319 / x86
    • ProjD.pdb            10:03 AM             8KB
  • BuildTest\ProjD\bin\Debug
    • ProjD.dll              10:03 AM             4KB        4F0F7877-5046-4A32-8B8E-FAD8E2660CE6            v4.0.30319 / x86
    • ProjD.pdb           10:03 AM             8KB

Do a Debug rebuild of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB all DLLs have new date modified.
  • ProjC all DLLs have new date modified.
  • ProjD all DLLs have new date modified.

Do a Debug build of ProjA

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

Change Class1.cs in ProjA and do a Debug build of ProjA

  • ProjA: ProjA.dll has new date modified, ProjB.dll does not.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

Change Class1.cs in ProjB and do a Debug Build of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB: ProjB.dll has new date modified; ProjC.dll and ProjD.dll do not.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

We change Class1.cs in ProjC and do a Debug Build of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB: ProjB.dll and ProjC.dll have new date modified; ProjD.dll does not.
  • ProjC: ProjC.dll has new date modified; ProjD.dll does not.
  • ProjD no DLLs have new date modified.

We change Class1.cs in ProjD and do a Debug Build of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB all DLLs have new date modified.
  • ProjC all DLLs have new date modified.
  • ProjD all DLLs have new date modified.

Conclusion

  • If a dependency has changes it will be built when the dependent project is built.

Behavior When Project with Dependencies is Built

Next, I want to verify the behavior when a project that has dependencies is built.

Clean the solution and do a debug build of the solution.

Do a Debug build of ProjD

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

We change Class1.cs in ProjD and do a Debug build of ProjD

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD all DLLs have new date modified.

We change Class1.cs in ProjC and do a Debug Build of ProjD

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

Conclusion

  • If a project with dependents is built, any projects that depend on it will not be built.

Behavior When Bin is Cleaned

I manually deleted the DLLs in ProjD's bin and built ProjA, and the DLLs reappeared with the same date modified. Maybe they were fetched from the obj folder.

I did a clean on ProjD (this cleans obj) and built ProjA, and new DLLs were added to ProjD.

Conclusion

  • The obj folder acts like a cache for builds.

Behavior when External Dependencies are Part of Build

Add two new projects to solution

  • ExtA x64 > ExtB
  • ExtB x86

Updated these projects so they output to the solution Output/Debug folder.

Added references to the ExtA and ExtB output DLLs

  • ProjA > ExtA
  • ProjB > ExtB

I did a solution rebuild and noticed something that may also be a problem in other tests. When building ProjC, ProjD, and ExtA, we get a warning:

warning MSB3270: There was a mismatch between the processor architecture of the project being built “AMD64” and the processor architecture of the reference. This mismatch may cause runtime failures. Please consider changing the targeted processor architecture of your project through the Configuration Manager so as to align the processor architectures between your project and references, or take a dependency on references with a processor architecture that matches the targeted processor architecture of your project.

Also, ProjA and ProjB are complaining about reference resolution:

warning MSB3245: Could not resolve this reference. Could not locate the assembly “ExtA”. Check to make sure the assembly exists on disk.

In Visual Studio, I update the project dependencies for ProjA and ProjB to include the Ext projects. This fixes the MSB3245 warning.

Conclusion

  • We need to build all dependencies with the same platform target as the dependent.
  • We need to build external references before building any dependents of the external references (e.g. get NuGet dependencies).
  • When a solution contains a project that depends on another project but does not have a project reference to it, update the solution's project dependencies to force the dependency to build first.

Separating Platform Builds

Add new platforms for x64 and x86. Update the configuration so each project can do an x86 and an x64 build. Have the Ext projects output to x86 and x64 folders for Release and Debug builds.

Add new projects for ExtC and ExtD and have the respective Proj projects reference their Release output: ProjC should reference the ExtC x64 Release and ProjD should reference the ExtD x86 Release.

Issue Changing Platform on Newly Added Solution Projects

So, I am unable to change the platform target for ExtC/D because x86 and x64 do not appear in the drop-down, and I can't add them because the UI says they are already created. I manually add them to the project files:

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x64'">
    <DebugSymbols>true</DebugSymbols>
    <OutputPath>bin\x64\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <DebugType>full</DebugType>
    <PlatformTarget>x64</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
    <OutputPath>bin\x64\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <Optimize>true</Optimize>
    <DebugType>pdbonly</DebugType>
    <PlatformTarget>x64</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x86'">
    <DebugSymbols>true</DebugSymbols>
    <OutputPath>bin\x86\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <DebugType>full</DebugType>
    <PlatformTarget>x86</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x86'">
    <OutputPath>bin\x86\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <Optimize>true</Optimize>
    <DebugType>pdbonly</DebugType>
    <PlatformTarget>x86</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>

Now I can update the output path for ExtC/D and update Configuration Manager to the proper platforms.

Since ProjD is exclusively an x86 project, I removed it from the x64 build. I have ExtD building both x86 and x64. I updated dependencies so ExtC/D build before ProjC/D.

Final Conclusion

This was a bit much to verify what may be common knowledge on the inter-webs, but I wanted to see for myself. There is more I want to experiment with, like NuGet and build performance and optimization, but this gave me enough to move forward with an initial revamp of our automated build. I am going to proceed with a separate pipeline for an x86 build of the entire solution and a separate deploy for the x86 application. I really believe that, going forward, NuGet can become a key tool in standardizing and optimizing per-application build and deployment.

Summary

  • If a dependency has changes, it will be built when the dependent project is built; otherwise it will not be rebuilt. This is all a feature of MSBuild. When we move to per-application build, we will have to build separate solutions for each application and build them in isolated pipelines. To prevent conflicts with other applications, we should build dependencies that are shared across applications in a separate pipeline.
  • If a project with dependents is built, any projects that depend on it will not be built.
  • The obj folder acts like a cache for builds. We could extend this concept to a common DLL repo where all builds send their DLLs, but we would need a reliable way of versioning the DLLs so that we always use the most recent or a specific version… sounds like I am proposing NuGet (smile).
  • We need to build all dependencies with the same platform target as the dependent. We may build the main solution as x64 and build other projects that need x86 separately. I believe this would be the most efficient approach since the current x86 projects will not change often.
  • We need to build external references before building any dependents of the external references (e.g., get NuGet dependencies). We do this now with NuGet by fetching packages first, but when we move to per-application build and deploy this will automatically be handled by Go.cd's fan-in feature. Application builds will have pipeline dependencies on any other application builds they need, which will cause the apps to always use the most recent successful build of an application. We can make this stronger by having applications depend on test pipelines to ensure the builds have been properly tested before integration.
  • When a solution contains a project that depends on another project but does not have a project reference to it, update the solution's project dependencies to force the dependency to build first.

NOTE

This is actually a re-post of a post I did on our internal team blog. One comment there was that we should also do an Any CPU build; then we could have one NuGet package with all versions.

Thoughts on DevOps

I am not a DevOps guru. I have been learning DevOps and Continuous Improvement for about 6 years now. I wanted to blog about some of what I have learned because I see companies doing it wrong. I wanted to start internalizing some of the lessons I have learned and the grand thoughts I have had just in case someone asks me about DevOps one day.

DevOps is a Religion

I'm not going to define DevOps because there is enough of that going on (https://en.wikipedia.org/wiki/DevOps). I will say that you can't hire your way to DevOps because it isn't a job title. You can't have one team named DevOps and declare you are doing DevOps. Everyone on your application delivery teams has to convert to DevOps. When you only have one team enabling some DevOps practices through tools and infrastructure, you are only getting a piece of the DevOps pie. Until you have broken down the silos and increased communication, you haven't realized DevOps.

Do not focus on implementing DevOps by creating another silo in a "DevOps" team. You can create an implementation team that focuses on DevOps processes, tools, and infrastructure, but if this will be a long-lived team, call it a Delivery Systems team or Delivery Acceleration team and make sure its members are embedded in sprint teams and not off in some room guarded by a ticket system. As with some religions, you have to congregate. Your delivery team has to communicate with each other outside of tickets and email.

When you name the team DevOps, it pushes responsibility for DevOps to that team, but the outcomes of DevOps are the responsibility of the entire delivery team. This is the same problem as with a QA team: your QA team is not responsible for quality, the entire delivery team is responsible for quality. When you have silos like these, it is hard to get a "One Delivery Team" mindset. Find ways to break down silos, and then you won't be one of those companies that missed the DevOps boat because you couldn't get your new siloed DevOps team to deliver on the promises of DevOps.

Fast Feedback is a Main Byproduct

One of the main benefits of doing continuous anything (DevOps includes continuous improvement processes) is that you get fast feedback. The tighter and faster your feedback loops, the faster you can iterate. Take a small step, get feedback, adjust based on the feedback, and iterate. It's not rocket science, it's simplification. Work in smaller batches and talk about how to make the next batch better; watch your automation pipelines and KPIs and talk about how to make them better… TALK.

Collaboration is the Key that Unlocks the Good Stuff

Having the entire delivery team involved and talking is key. The business, QA, Security, IT, Operations, Development… everyone must communicate to ensure the team delivers the value that end users are looking for. Give end users value, they give the business value, loop. A delivery team that huddles in its silos with minimal communication with other teams is a good way to short-circuit this loop. DevOps is a way of breaking down the silos and improving collaboration, even if DevOps is not the best name to convey what it can deliver. Just remember that the DevOps way should extend beyond the development and operations teams.

Automation is the Glue that Binds Everything

An automated delivery pipeline from source check-in to production gives you a repeatable delivery process that automatically provides fast feedback. It gives the entire team a way to start, stop, and monitor the pipeline and to adjust based on feedback from it. It also aids collaboration by providing dashboards and communication mechanisms accessible to the entire delivery team.

If you have no automation, start by automating your build on each check-in. Then automate running unit tests, then deployment to a test environment, then running automated functional tests, then deployment to the next environment. Don't forget virtualization: figure out how you can virtualize your environments and automate provisioning an environment to run your apps in. Start where you are and focus on adding the next piece until you can automatically build once and deploy and test all the way to production. Iterate your way to continuous delivery.

Virtualization is Magic Pixie Dust

Many people I have asked think of DevOps as virtualization and automated server configuration and provisioning. Even though this isn't everything in DevOps, it's a big part of it. Being able to spin up a virtual environment to run a test removes environments as a hindrance to more testing. Being able to spin up a virtualized mock environment for a third-party service that is not ready allows us to test in spite of the missing dependency. Virtualization in production allows us to hot-swap the current environment with a new one when we are ready for the next release or when production nodes are being hammered or are otherwise unruly. Codifying all of this virtualization allows us to treat our infrastructure just like we do product code: we can manage changes in a source control repository and automatically run the infrastructure code as part of our delivery process.

Quality, Security and Health Come First

Before one line of code is written for a change, an analysis of the desired change must be done. I'm not saying a large planning document has to be produced, but the team has to talk through the potential effect on quality, security, and health (QSH), and it makes sense to record these discussions somewhere to be used during the iteration. You can create a doc or record it in a ticket, but QSH must be discussed and addressed during the iteration.

QSH is not something that happens after development has declared code complete. It should happen in parallel with development. There should be automated unit, integration, and end-to-end checks. There should be automated static analysis and security checks. A load test and analysis of health monitors should measure how the application is responding to changes. All of this should happen during development iterations or as close to development as possible.

On a side note, under Health I am lumping performance, scale, stress, and any type of test where a simulated load is run against the application. This could be spinning up a new virtualized environment, running automated tests, then turning off the database or a service to see what happens. Health testing attempts to introduce scenarios that give insight into how the application will respond to changes. It may take a lot to get to the level of Netflix and its Chaos Monkey in production, but having infrastructure and tests in preproduction to measure health will give you something instead of being totally blind to health issues.

Conclusion

I know there is no real meat here or guidance on how to do these things, but that's what Google is for; or read Gene Kim's The Phoenix Project. Anyway, I may be a little naive on a few points, but the gist is that DevOps is more than a job or team title, more than development and operations signing a peace treaty, more than automated server configuration. Think of it as another step in improving your continuous improvement process, with a focus on cross-team collaboration where you break down the silos separating all of the teams that deliver your application.

Video Recording C# WebDriver Tests in TestPipe

The title is a little misleading because you can use the technique below to do a screen capture of anything happening on the screen, not just WebDriver tests. Still, TestPipe uses C# WebDriver out of the box, so those are the types of tests we will be recording.

So, we want to add video recording of tests to TestPipe. At first I thought this would be very difficult, but after finding the Microsoft Expression Encoder SDK it became a lot easier. I was even able to find other people who have used this SDK, which made the decision to move forward a little easier.

First, I read the Working with Screen Capture section of the Overview of the Expression Encoder SDK. From this I learned that I needed to create an instance of ScreenCaptureJob. The question is, where do I create it?

In TestPipe we have a ScenarioSession class that holds the state of various things while a test scenario runs, and it makes sense to expose this new functionality there because we want to be able to control video recording within the context of individual test scenarios. Do we add a new property on the session, or should it be a new property on the IBrowser interface? We already have a TakeScreenshot method on IBrowser. Still, I don't think it is a good fit on the browser interface because ScreenCaptureJob needs a bit of setup that is out of scope for a browser, and I don't want to muddy up the API more than it already is.

When we set up a scenario, we want to allow setup of the ScreenCaptureJob based on configuration for a feature and/or a scenario. We define features and scenarios in a text file, currently using Gherkin, and we store data used in feature and scenario tests in a JSON file. So, we have to configure video recording in the Gherkin, the JSON, or both.

Do we keep all recordings or only failing recordings? What if we want to keep only failing, but from time to time we need non-failing recordings for some reason? Do we overwrite old recordings or store in unique folders or filenames?

To trigger the recording we could use tags. If an @Video tag is present on the scenario or feature, record the scenario(s) and only keep the recording if the scenario fails. If the @Debug tag is present on the Feature or the Scenario, keep the recordings even if they don’t fail.

We can create a unique folder for the recordings so that we can store videos of multiple runs of the same scenario. We may want to think about how we clean these up, but we may have enough file clean up in other processes. We will just have to watch hard drive space in production use.

So, we have a strategy to automatically configure recording. Now, we have to implement it in a way that also allows manual configuration just in case we want to hard wire video recording in a test.

So, I found our seam to make the changes for video recording. In our RunnerBase class we have methods to setup and teardown a scenario. It is there that we will make the change to configure, start, stop, and delete video recordings.

Now to implement. First, I download the encoder from http://www.microsoft.com/en-us/download/details.aspx?id=27870. This will have to be installed on every server that will run tests, so I create a PowerShell script to install it. It would be nice to also do a Chocolatey package, but that is overkill for me because I am not using Chocolatey on my servers right now. You can create your own automated installer by extracting the setup file from the download and then creating a PowerShell script to run

setup.exe -q

to quietly install. I believe you can use the -x parameter to uninstall, but I haven't tested this yet. (This assumes msiexec command-line options are used: https://msdn.microsoft.com/en-us/library/aa367988(v=vs.85).aspx)

With the encoder installed, we have access to the DLLs we need to work with. In Visual Studio I add references to Microsoft.Expression.Encoder, Microsoft.Expression.Encoder.Api2, Microsoft.Expression.Encoder.Types, and Microsoft.Expression.Encoder.Utilities. Not sure if I need all of these, but they were added by the installer so I will keep them for now.

From here I can add a using

using Microsoft.Expression.Encoder.ScreenCapture;

and implement recording based on the sample code, updating it to fit TestPipe standards.
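Trimmed down, the core of it looks roughly like this (a sketch based on the SDK sample, not the exact TestPipe code; the output path is just an example):

using Microsoft.Expression.Encoder.ScreenCapture;

public class ScenarioRecorder
{
    private ScreenCaptureJob screenCaptureJob;

    // Start capturing the screen to the given .xesc file; TestPipe generates a unique path per run.
    public void StartRecording(string outputPath)
    {
        this.screenCaptureJob = new ScreenCaptureJob();
        this.screenCaptureJob.OutputScreenCaptureFileName = outputPath;
        this.screenCaptureJob.Start();
    }

    // Stop capturing; the .xesc file is then kept or deleted based on the scenario result and tags.
    public void StopRecording()
    {
        this.screenCaptureJob.Stop();
    }
}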

One caveat is that the encoder outputs a proprietary Microsoft video format, XESC. I thought about collecting all the videos that are kept at the end of a test run and running some kind of parallel task to convert them to a more portable format. In the end, I just left it alone. This is a new feature, only my team will be looking at the videos, and everyone has Windows Media Player, which can play the format.

I won't write more on implementation details because I am boring myself, but if you want to check it out you can view it on GitHub (RunnerBase is where we use the recorder, and you should be able to figure out the rest). One interesting twist is that we implemented Expression Encoder behind an interface so that it isn't a requirement for using TestPipe. If we didn't do this, you wouldn't be able to build or use TestPipe without first installing the encoder dependency.

So, TestPipe comes out of the box with a dummy implementation of the interface that won't actually do the recordings. If you want to capture actual recordings, you can use the TestPipe.MSVideoRecorder plug-in or implement the IVideoRecorder interface over another screen capture program to enable video recording of tests. Right now TestPipe.MSVideoRecorder is included in the TestPipe solution, but it is not set to build automatically. When we make changes, we set it to build and manually move the binary to the folder we have configured to hold the video recorder plug-ins. Eventually, we will move it to a separate repository and create a NuGet package, but I'm tired.
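To give a sense of the plug-in seam, here is a hypothetical, simplified shape of the abstraction (the real IVideoRecorder interface on GitHub may have different members; this just shows the idea):

// The no-op default lets TestPipe build and run without Expression Encoder installed;
// TestPipe.MSVideoRecorder provides the real implementation as a plug-in.
public interface IVideoRecorder
{
    void StartRecording(string outputPath);
    void StopRecording();
}

public class DummyVideoRecorder : IVideoRecorder
{
    public void StartRecording(string outputPath)
    {
        // Intentionally does nothing.
    }

    public void StopRecording()
    {
        // Intentionally does nothing.
    }
}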

References

Overview of the Expression Encoder SDK – https://msdn.microsoft.com/en-us/library/Gg602440(v=Expression.40).aspx

Road to screen recording in webdriver with C# – http://roadtoautomation.blogspot.com/2013/07/road-to-screen-recording-in-webdriver.html

Record video of your Selenium Tests – https://blog.testingbot.com/2011/12/19/record-video-of-your-selenium-tests

What is Object Oriented Programming?

Until today I had never contemplated what object oriented programming means. I have thought about OOP in the context of writing object oriented code and figuring out how to be better at OOP, but I never really studied its original design and intent. I was listening to a .NET Rocks podcast on the subject of lean functional programming, and the discussion led to Dr. Alan Kay's definition of OOP. Dr. Kay coined the phrase object oriented programming during his work developing the Smalltalk language. Even though he is credited with the term OOP, what OOP became is not the same as what he envisioned.

I did a little research and landed on this email thread between Stefan Ram and Dr. Kay.

http://www.purl.org/stefan_ram/pub/doc_kay_oop_en

Dr. Kay credits his thoughts on OOP to his background in biology and mathematics. He says,

I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning — it took a while to see how to do messaging in a programming language efficiently enough to be useful). Dr. Alan Kay, July 17, 2003

Even polymorphism wasn't a part of his original thoughts on OOP. He proposed another term to describe behavior in related objects: "genericity."

He ends the thread by giving his definition of OOP,

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. Dr. Alan Kay, July 23, 2003

Doesn't this sound like functional programming?

This was written years after Dr. Kay's original OOP work, and he may have had changes in thought over the years, but I still believe his original intent comes through here. So it seems that, through the years, OOP may have been forked or hijacked. Some argue that we are paying for it today; they say that OOP did not stay true to the OOP of Dr. Kay and his peers in the early days of OOP (am I spamming OOP here?).

Now I have another reason to learn F#. With the little I know now, I have already started trying to remove the ill effects of mutable state and use message passing to communicate across layers. I know there is so much more to learn, and I have been doing this a long time. I look forward to digging into F#, and maybe even Erlang, to see how they can influence my OOP.

Cross Domain PowerShell Remoting [Fail]

I tried to run our PowerShell environment configuration scripts today and got hit with a nasty error. I double-checked my credentials, so I know that wasn't the issue. The scripts worked just a month ago, but we did have some stupid security software installed on our workstations that may be changing how remoting works. Let's see if I can get around it before I open a ticket and start complaining.

Here is the error. This results from a simple call to New-PSSession. The other server is in another domain, but like I said this has been working just fine.

 New-PSSession : [agpjaxd1pciapp1] Connecting to remote server agpjaxd1pciapp1 failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request.
 Possible causes are:
  -The user name or password specified are invalid.
   -Kerberos is used when no authentication method and no user name are specified.
   -Kerberos accepts domain user names, but not local user names.
   -The Service Principal Name (SPN) for the remote computer name and port does not exist.
   -The client and remote computers are in different domains and there is no trust between the two domains.
  After checking for the above issues, try the following:
   -Check the Event Viewer for events related to authentication.
   -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport.
  Note that computers in the TrustedHosts list might not be authenticated.
    -For more information about WinRM configuration, run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.

After I read this, I just stared at it for about 5 minutes; deer in the headlights.

I found some hope on the PowerShell Scripter’s friend, “Hey Scripting Guy” blog – http://blogs.technet.com/b/heyscriptingguy/archive/2013/11/29/remoting-week-non-domain-remoting.aspx.

Anyway, the solution from Honorary Scripting Guy Richard Siddaway was to add the computer I am connecting to to the trusted host list. The trusted host list basically tells your computer, "Hey, you can trust this computer, go ahead and share my sensitive and private credentials with them." So, be careful with this.

You can view the trusted host list with this PowerShell command.

Get-Item -Path WSMan:\localhost\Client\TrustedHosts

You can add a computer to the trusted list with this command.

Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'computerNameOfRemoteComputer'
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): Y

Now, I run the configuration script and I am deer in the head lights again.

New-PSSession : Opening the remote session failed with an unexpected state. State Broken.

Such a helpful error message. A Stack Overflow post (http://stackoverflow.com/questions/30617304/exchange-remote-powershell-gets-sporadic-broken-state) suggests it may be a timeout, and I'm feeling that because the script sat on "Creating Session" forever (why it takes so long is probably the next question). I update my script to increase the timeout.

$so = New-PSSessionOption -IdleTimeout 600000
$Session = New-PSSession -ComputerName $node.ComputerName -Credential $credential -SessionOption $so;

A 10-minute timeout is good, right? So, I try again and the State is still Broken. It's not mission critical at the moment, so I will investigate further later.

You can read more about possible solutions at the links above.

Multitenant Thoughts

I am building my third multitenant SaaS solution. I am not referencing any of my earlier work because I think those solutions were way more work than they should have been. Also, I have since moved on from the whole ASP.NET Web Forms development mindset, and I want to start with a fresh perspective instead of trying to improve my big balls of spaghetti code.

Today, my thoughts center around enforcing the inclusion and processing of a tenant ID in every command and query. My tenant model keeps all tenant data in a shared database and shared tables. To keep everything segregated, every time I write or read data there has to be a tenant ID included so that we don't mess with the wrong tenant's data.

I have seen all kinds of solutions for this, some more complicated than I care to tackle at the moment. I am currently leaning towards enforcing it in the data repository.

I am using a generic repository for CRUD operations and an event repository for async, event-driven workflows. In the repository APIs I want to introduce a validated tenant ID parameter in every write and read operation. This will force all clients to provide the ID when they call the repos.

I just have to update a couple of classes in the repos to enforce inclusion of the tenant ID when I write data. Also, every read will use the tenant ID to scope the result set to a specific tenant's data. I already have a proof of concept for this app, so this will be a breaking change for my existing clients, but it's still not a lot of work considering that I almost decided to enforce the tenant ID in a layer higher than the repo, which would have been a maintenance nightmare.
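A minimal sketch of the kind of enforcement I mean (generic placeholder names and an in-memory store, not my actual repository code):

using System;
using System.Collections.Generic;
using System.Linq;

public interface ITenantEntity
{
    Guid TenantId { get; set; }
}

// Every write stamps the tenant ID and every read is scoped to it.
public class Repository<T> where T : class, ITenantEntity
{
    private readonly List<T> store = new List<T>();   // stand-in for the real data store

    public void Add(Guid tenantId, T entity)
    {
        RequireTenant(tenantId);
        entity.TenantId = tenantId;
        this.store.Add(entity);
    }

    public IEnumerable<T> Get(Guid tenantId)
    {
        RequireTenant(tenantId);
        return this.store.Where(e => e.TenantId == tenantId);
    }

    private static void RequireTenant(Guid tenantId)
    {
        if (tenantId == Guid.Empty)
        {
            throw new ArgumentException("Tenant ID is required.", nameof(tenantId));
        }
    }
}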

Is this best practice? No. I don’t think there is a best practice besides the fact that you should use a tenant ID to segregate tenant data in a shared data store. This solution works for my problem and I am able to maintain it in just a couple classes. If the problem changes I can look into the fancy solutions I read about.

Now, how will I resolve the tenant ID? Sub-folder, sub-domain, query string, custom domain…?

CQRS is so easy.

This is just a quick post as I sit in the hospital bored to death and recovering.

I have been intrigued by CQRS, command and query responsibility segregation, ever since I heard about it in a talk by Greg Young at the first Code on the Beach conference. I decided to surf the blogosphere to see what people are doing these days with CQRS. It seems like there is still quite a bit of confusion about what it is.

I have to admit, at first it was extremely intimidating to me too. Not because CQRS is hard, but like many people, I blurred the lines between CQRS, DDD, and event sourcing. When you look at CQRS in the context of everything it is not, it tends to look more complicated than it really is.

I am going to borrow a trick that Greg uses and show a code example of the simplicity of CQRS. Let’s say we have a service that provides an API to manage a product catalog:

ProductManager

void CreateProduct(Product product)
Product GetProduct(string productId)
bool IsProductInStock(string productId)
void UpdateProductPrice(string productId, decimal price)
void RemoveProduct(string productId)
List<Product> GetProductRecommendations(string productId)

If we apply CQRS to this service, we would simply wind up with two services. One that we can optimize for write operations (commands) and another that is optimized for read operations (queries).

ProductQueries

Product GetProduct(string productId)
bool IsProductInStock(string productId)
List<Product> GetProductRecommendations(string productId)

ProductCommands

void CreateProduct(Product product)
void UpdateProductPrice(string productId, decimal price)
void RemoveProduct(string productId)

Easy peasy… With the read and write operations segregated into their own APIs, you are free to do all sorts of fancy optimizations and are on better footing to explore DDD and event sourcing.

Conclusion

The point is that CQRS is just an extension of the simple CQS pattern that moves the concept a step further, from the method level to the class or API level. Nothing more, nothing less. I believe most applications can benefit from CQRS even if you aren't going to do DDD or event sourcing. So, read all about CQRS, but if you are studying more than the simple separation of read and write operations, you have read too far and are getting into other concepts.
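For comparison, CQS at the method level just says a method should either change state (a command that returns nothing) or return data (a query with no side effects); CQRS lifts that same split up to the class/API level. A tiny sketch:

using System.Collections.Generic;

public class ProductCatalog
{
    private readonly Dictionary<string, decimal> prices = new Dictionary<string, decimal>();

    // Command: changes state, returns nothing.
    public void UpdateProductPrice(string productId, decimal price)
    {
        this.prices[productId] = price;
    }

    // Query: returns data, changes nothing.
    public decimal GetProductPrice(string productId)
    {
        return this.prices[productId];
    }
}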

GoCD: Install Multiple Agents with Powershell, Take 2

I wrote about how to Automate Agent Install with PowerShell and thought I would provide the script I am using now, since I recently had to deploy some new agents. The script is below; it is pretty self-explanatory and generally follows my previous blog post and the Go.cd documentation.

We basically copy an existing agent to a new location, remove some files that are agent-specific, and create a Windows service to run the agent. Until I feel enough pain from doing it by hand, I set the service account/password and start the service manually. Also, I configure the agent on the server manually through the Go.cd UI. When I have to install more agents, I will probably automate those steps then.

$currentAgentPath = "D:\Go Agents\Internal\1";
$newAgentName = "Go Agent Internal 3";
$newAgentPath = "D:\Go Agents\Internal\3\";

Write-Host "Copying Files"
Copy-Item "$currentAgentPath\" -Destination $newAgentPath -Recurse;

Write-Host "Deleting Agent Specific Files"
$guidText = "$newAgentPath\config\guid.txt";

if (Test-Path $guidText)
{
 Remove-Item $guidText;
}

Remove-Item "$newAgentPath\.agent-bootstrapper.running";

Write-Host "Create Agent Service"
New-Service -Name $newAgentName -Description $newAgentName -BinaryPathName "`"$newAgentPath\cruisewrapper.exe`" -s `"$newAgentPath\config\wrapper-agent.conf`"";

#$credential = Get-Credential;
#Eventually, we will write a function to set the service account and password and start the service. It would be nice to have a way to automatically configure the agent on the server too.

I guess I decided to do the work for you 🙂

Enjoy