
GTP for BDD

Graphical Test Plan

I read a little about Graphical Test Planning (GTP), created by Hardeep Sharma and championed by David Bradley, both from Citrix. It’s a novel idea and somewhat similar to the mind map test planning I have played around with. The difference is that you’re not capturing features or various heuristics and test strategies in a mind map; you are mapping expected behavior only. You then derive a test plan from that graphical understanding of the expected behavior of the system. I don’t know a lot about GTP, so this is a very watered-down explanation. I won’t attempt to explain it further, but you can read all about it here:

Plan Business Driven Development with GTP

What interested me was the fact that I could abstract how we currently spec features into a GTP-type model. I know the point of GTP is not to model features, but our specs model behavior, and they happen to be captured in feature files. It’s classic Behavior Driven Development (BDD) with Gherkin. We have a feature that defines some aspect of value the system is expected to provide to users. Within the feature we have various scenarios that describe the expected behaviors of the feature. Scenarios have steps, expressed as precondition, action, and expectation (PAE) or, in Gherkin, Given-When-Then (GWT), that define how a user would execute the scenario. We also have a feature background, which is a feature-wide precondition shared by all scenarios in the feature.
I said we use Gherkin, but our new test runner transcends GWT. We can define PAE in plain English without the GWT constraints: we can choose the terms that describe the precondition, action, and expectation instead of being forced into GWT, which sometimes makes us jump through hoops just to get the wording to sound right.

GTP Diagram

If we applied something like GTP we would still model the scenarios, but there would be more hierarchy before we reach the executable scenarios. We currently use tagging to group similar scenarios that exercise a specific subset of a feature. This lets us provide faster feedback by running checks for just that subset instead of the entire feature when we are only concerned with changes to the subset. In a GTP-ish model, the leftmost portion of the diagram would hold generalized behavior specs, similar to how we use tagging, and as we move to the right the behavior becomes more granular until we hit a demarcation point for executable scenarios, which can then be expressed in a linked test case diagram (TCD). The GTP also has ways to capture metadata, such as a related requirement/ticket ID for traceability back to requirements, and metadata on each demarcation point (I can’t think of a better name) to link to the TCD or feature file that further defines it.
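To make that hierarchy concrete, here is a minimal sketch in Python (the class and field names are hypothetical, not the official GTP notation) of a behavior tree that runs from general behavior groups down to demarcation points carrying requirement/ticket IDs and links to a TCD or feature file.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GtpNode:
    """One behavior node in a GTP-style tree (a hypothetical model, not the official notation)."""
    name: str
    ticket_id: Optional[str] = None      # traceability back to a requirement/ticket
    linked_tcd: Optional[str] = None     # demarcation point: link to a TCD or feature file
    children: List["GtpNode"] = field(default_factory=list)

    def is_demarcation_point(self) -> bool:
        # A node that links out to a TCD/feature file is where executable scenarios begin.
        return self.linked_tcd is not None

# Left to right: general behavior -> more granular behavior -> demarcation point.
plan = GtpNode("Search", children=[
    GtpNode("Search by keyword", ticket_id="PROJ-101", children=[
        GtpNode("Keyword matches product titles", ticket_id="PROJ-101",
                linked_tcd="features/search_keyword.feature"),
    ]),
])
```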

Test Case Diagram

The test case diagram would contain the various scenarios that define the behavior of the demarcation points in the GTP. The TCD would also include background preconditions and the steps to execute each scenario. At this point it starts to feel like an extra step: we already have to write this content in a feature file, so diagramming it creates a redundant document that has to be maintained.
In the TCD there are shapes for behavior, preconditions, steps, and expectations. I think there should be additional shapes or metadata to express tags, because tags are important in how we categorize and control the running of scenarios. It would also help to have metadata linking back to the GTP the TCD is derived from, so we can flow back and forth between the diagrams. Metadata in the TCD matters because it gives us the ability to extract understanding beyond just the test plan and design. We could have shapes, metadata descriptions, and links to
  • execute automated checks
  • open a manual exploratory test tool
  • view current test state (pass/fail)
  • view historical data (how many times has this step failed, when was the last failure of this scenario…)
  • view flake analysis or score
  • view delivery pipeline related to an execution
  • view team members responsible for plan, develop, test and release
  • view related requirement or ticket
  • much more…

Since we also define manual tests, by tagging features or scenarios with a manual tag or by creating exploratory-test-based feature files, we could do this for both automated checks and manual tests.

GTP-BDD Binding

To get rid of the TCD redundancy we could generate the feature file from the diagram or vice-versa. Being able to bind GTP to BDD would make GTP more valuable to me.
We would need an abstract object graph that could be used to generate both the diagram and the feature file (or an Excel spreadsheet, an HTML page, or whatever else). We are almost there: we have a tool that can generate feature files from persisted objects and vice versa. We would just have to figure out how to generate the diagram and express it as an interactive UI rather than a static picture.
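As a rough illustration of that object graph, here is a small Python sketch (the names are hypothetical; this is not our actual tool) of persisted objects that can render a Gherkin-style feature file. The same objects could just as easily feed a diagram renderer.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    keyword: str   # Given/When/Then, or a plain-English PAE term
    text: str

@dataclass
class Scenario:
    name: str
    tags: List[str] = field(default_factory=list)
    steps: List[Step] = field(default_factory=list)

@dataclass
class Feature:
    name: str
    background: List[Step] = field(default_factory=list)
    scenarios: List[Scenario] = field(default_factory=list)

    def to_feature_file(self) -> str:
        """Render the persisted objects as Gherkin-style text."""
        lines = [f"Feature: {self.name}"]
        if self.background:
            lines.append("  Background:")
            lines += [f"    {s.keyword} {s.text}" for s in self.background]
        for sc in self.scenarios:
            if sc.tags:
                lines.append("  " + " ".join(f"@{t}" for t in sc.tags))
            lines.append(f"  Scenario: {sc.name}")
            lines += [f"    {s.keyword} {s.text}" for s in sc.steps]
        return "\n".join(lines)

feature = Feature(
    name="Account login",
    background=[Step("Given", "a registered user")],
    scenarios=[Scenario("Successful login", tags=["smoke"],
                        steps=[Step("When", "the user submits valid credentials"),
                               Step("Then", "the dashboard is shown")])],
)
print(feature.to_feature_file())
```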
What we have been struggling with is the ability to manually edit feature files and keep them in sync with the persisted objects. With a centralized UI this is easy, because everyone uses the UI to update the objects. When people update feature files in a source code repository, we have to worry about merge conflicts (yuck) and about whether the feature file or the persisted object is the source of truth. So we may have to reduce flexibility and force everyone to use the UI only. Everyone would need the discipline not to touch the feature files, even though we have nice tools built into our IDE to help write and manage them, and the tool would have to detect when someone has violated the policy, and so on… I digress.

Conclusion

With a graphical UI modeled on GTP/TCD to manage BDD, we could provide an arguably simpler way to visualize tests and the ability to drill down into different aspects of test plans and designs and their current and historical execution. With two-way binding between diagram and feature file, we would have a new way to manage our executable specifications. This model could be a powerful tool not only for test planning, but for test management as a whole. The end result would hopefully be better understanding for the team, increased flow in the delivery pipeline, enhanced feedback, and more value to the customer and the business.
Now let’s ask Google whether something like this already exists so I don’t have to add it to my ever-increasing backlog of things I want to build. Thanks to Hardeep Sharma, David Bradley, and Citrix for sharing GTP.

Monitoring Change Tickets in Delivery Pipelines

DevOps sounds cool, like some covert special-operations IT combat team, but many implementations miss the boat because they focus only on the relationship between Dev and Ops and are usually championed only by Ops. The name alienates important contributors on the software delivery team. The team is responsible for software delivery end to end, including analysis, design, development, build, test, deploy, monitoring, and support. The entire team needs to be included in DevOps and needs visibility into delivery pipelines from end to end. That is an unrelated rant, but it led me to thinking about how a delivery team can monitor changes in delivery pipelines.

Monitor Change

I believe it is important that the entire team be able to monitor changes as they flow through delivery pipelines. There are ticket management systems that capture some of the stages a change goes through, but these are mostly project-management workflow stages and they have to be changed manually. I’d like a way to automatically monitor a change as it flows from change request all the way to production, and to monitor actions that take place outside of the ticket or project management system.

Normally, change is captured in some type of ticket, perhaps in a project management system or bug database (e.g. Jira, Bugzilla). We should be able to track the various activities that take place as tickets make their way to production, and we need a way to trace those actions back to the change request ticket. I’d like a system where the activities involved in getting a ticket to production automatically generate events that are related to ticket numbers and stored in a central repository.

If a ticket is created in Jira, a ticket created event is generated. When a developer logs time on a ticket, a time logged activity event is generated that links back to the time log, or maybe holds data from the time log, for that ticket number.

When an automated build that includes the ticket runs, a build started activity event is triggered with the build data. As various jobs and tasks happen in the automated build, a build changed activity event is triggered with log data for the activity. When the build completes, a build finished activity event is triggered. There may be more than one ticket involved in a build, so there would be multiple events with similar data captured, but hopefully changes are small and constrained to one or a few tickets… that’s the goal, right: small batches failing fast and early.

We may want to capture the build events once and include every ticket involved instead of relating the event directly to each ticket; I’m not sure, I am brainstorming here. The point is I want full traceability across my software delivery pipelines, from change request to production, and I’d like these events stored in a distributed event store that I can project reports from. Does this already exist? Who knows, but I felt like thinking about it a little before I search for it.

Ticket Events

  1. Ticket Created Event
  2. Ticket Activity Event
  3. Ticket Completed Event

A ticket event will always include the ticket number and a timestamp for the event; think Event Sourcing. Ticket created occurs after the ticket is created in the ticket system. Ticket completed occurs once the ticket is closed in the ticket system. Ticket activity events are captured based on the activities that are configured in the event system.
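A minimal sketch of what such an event and an append-only store might look like, in Python (the class and field names are my assumptions, not an existing tool):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, List

@dataclass
class TicketEvent:
    ticket_id: str                      # e.g. "PROJ-123"
    event_type: str                     # "TicketCreated", "TimeLogged", "BuildStarted", ...
    data: Dict[str, Any] = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class EventStore:
    """Append-only, in-memory stand-in for a distributed event store."""
    def __init__(self) -> None:
        self._events: List[TicketEvent] = []

    def append(self, event: TicketEvent) -> None:
        self._events.append(event)

    def for_ticket(self, ticket_id: str) -> List[TicketEvent]:
        return [e for e in self._events if e.ticket_id == ticket_id]

store = EventStore()
store.append(TicketEvent("PROJ-123", "TicketCreated", {"summary": "Fix login timeout"}))
store.append(TicketEvent("PROJ-123", "TimeLogged", {"hours": 2.5, "who": "dev1"}))
```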

Ticket Activity Events

A ticket activity is an action that occurs on a change request ticket as it makes its way to production. Each ticket activity will have events for started, changed, and finished, and those events can include relevant data for the particular type of activity. There may also be other statuses included in each of these events; for example, a finished event could include a status of error or failed to indicate that the activity finished but had an error or failed.

  • {Ticket Activity} Started
  • {Ticket Activity} Changed
  • {Ticket Activity} Finished

For example: a Deploy Started event that carries the deploy log, a Build Finished event that carries the build log, or a Test Changed event that carries new test results from an ongoing test run.

Maybe this is overkill? Maybe it should be simplified so we only need one activity event per activity, and it includes data for started, changed, finished, and other statuses like error and fail. I guess it depends on whether we want to stream activity event statuses or ship them in bulk when an activity completes; again, I’m brainstorming.
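Either option can be expressed with the same basic event shape. A quick sketch of the two alternatives, using illustrative field names only:

```python
# Option A: stream a separate event per status as the activity progresses.
streamed = [
    {"ticket": "PROJ-123", "type": "Build Started",  "data": {"build": 42}},
    {"ticket": "PROJ-123", "type": "Build Changed",  "data": {"build": 42, "log": "unit tests passed"}},
    {"ticket": "PROJ-123", "type": "Build Finished", "data": {"build": 42, "status": "failed"}},
]

# Option B: ship one consolidated event when the activity completes.
consolidated = {
    "ticket": "PROJ-123",
    "type": "Build",
    "data": {"build": 42, "status": "failed",
             "statuses": ["started", "changed", "finished"]},
}
```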

Activities

Not every ticket will have activity events triggered for every activity the system can capture. Activity events are triggered on a ticket when the ticket matches the scope of the activity, and scope is determined by the delivery team.

Below are some of the types of activity events I could see modeling on my project, but different teams will have different types, so ticket activity events have to be configurable: every team has to be able to add and remove the types of ticket activity events they want to capture.

  1. Analysis
    1. Business Analysis
    2. Design Analysis
      1. User Experience
      2. Architecture
    3. Technical Analysis
      1. Development
      2. DBA
      3. Build
      4. Infrastructure
    4. Risk Analysis
      1. Quality
      2. Security
      3. Legal
  2. Design
  3. Development
  4. Build
  5. Test
    1. Unit
    2. Integration
    3. End-to-end
    4. Performance
    5. Scalability
    6. Load
    7. Stress
  6. Deploy
  7. Monitor
  8. Maintain

Reporting and Dashboards

Once we have the events captured, we can make various projections to create reports and dashboards for monitoring and analyzing our delivery pipelines. With the ticket event data we can also report at other scopes; say we want to report on a particular sprint or project. With the ticket Id we should be able to gather those events and relate other tickets in the same project or sprint. It would take some thought as to whether we would want to capture project and sprint in the event data or leave that until the time we make the actual projection, but with the ticket Id we can expand our scope of understanding and traceability.
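As a sketch of what a projection over the raw events might look like (field names and sample data are made up), here is a simple fold that turns created/completed events into a lead-time-per-ticket report; a sprint- or project-level report would group the same events by an extra field or by looking the ticket Id up in the ticket system.

```python
from datetime import datetime
from typing import Dict, List

# Raw ticket events as they might come out of the event store (illustrative data only).
events: List[dict] = [
    {"ticket": "PROJ-1", "type": "TicketCreated",   "at": datetime(2016, 1, 4, 9, 0)},
    {"ticket": "PROJ-1", "type": "TicketCompleted", "at": datetime(2016, 1, 8, 16, 30)},
    {"ticket": "PROJ-2", "type": "TicketCreated",   "at": datetime(2016, 1, 5, 11, 0)},
]

def lead_time_report(events: List[dict]) -> Dict[str, str]:
    """Project created/completed events into a lead-time-per-ticket report."""
    created, completed = {}, {}
    for e in events:
        if e["type"] == "TicketCreated":
            created[e["ticket"]] = e["at"]
        elif e["type"] == "TicketCompleted":
            completed[e["ticket"]] = e["at"]
    report = {}
    for ticket, start in created.items():
        end = completed.get(ticket)
        report[ticket] = str(end - start) if end else "still open"
    return report

print(lead_time_report(events))
```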

Conclusion

The main goal of this exploration was to think through a way to monitor change as it flows through our delivery pipelines. We need a system that can capture the raw data for ticket created and completed events and for all of the configured ticket activity events that occur in between. As I look for an existing app, I can refer back to this to see whether it meets what I envisioned, or whether there may be a need to build it.

Sev1 Incident

I read a book called The Phoenix Project, a surprisingly good book about a company establishing a DevOps culture. One of the terms in the book that I had no experience with was Sev1 incident. I have since heard it repeated and have come to find out that it is part of a common grading of incident severity. Well, about a year after I read the book I decided to finally research it and put more thought into a formalized incident reporting, triage, mitigation, and postmortem workflow, which is similar to the thoughts I had on triaging failing automated tests.

Severity Levels

So, first to define the severity levels. Fortunately, David Lutz has a good breakdown on his blog – http://dlutzy.wordpress.com/2013/10/13/incident-severity-sev1-sev2-sev3-sev4-sev5/.


  • Sev1 Complete outage
  • Sev2 Major functionality broken and revenue affected
  • Sev3 Minor problem, bug
  • Sev4 Redundant component failure
  • Sev5 False alarm or alert for something you can’t fix

Identify Levels

With that, I need to define how to identify the levels. IBM has a breakdown that simplifies this on their Java SDK site – http://publib.boulder.ibm.com/infocenter/javasdk/v1r4m2/index.jsp?topic=%2Fcom.ibm.java.doc.diagnostics.142%2Fhtml%2Fbugseverity.html:

Sev 1

  • In development: You cannot continue development.
  • In service: Customers cannot use your product.

Sev 2

  • In development: Major delays exist in your development.
  • In service: Users cannot access a major function of your product.

Sev 3

  • In development: Major delays exist in your development, but you have temporary workarounds, or can continue to work on other parts of your project.
  • In service: Users cannot access minor functions of your product.

Sev 4

  • In development: Minor delays and irritations exist, but good workarounds are available.
  • In service: Minor functions are affected or unavailable, but good workarounds are available.
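For quick reference, that identification guidance could be captured as a simple lookup that a triage checklist or tool could use; a small sketch in Python, paraphrasing the breakdown above:

```python
# Paraphrased severity identification guide, keyed by context (a sketch, not an official scale).
SEVERITY_GUIDE = {
    "Sev1": {"in_development": "You cannot continue development.",
             "in_service":     "Customers cannot use your product."},
    "Sev2": {"in_development": "Major delays exist in your development.",
             "in_service":     "Users cannot access a major function of your product."},
    "Sev3": {"in_development": "Major delays, but workarounds exist or other work can continue.",
             "in_service":     "Users cannot access minor functions of your product."},
    "Sev4": {"in_development": "Minor delays and irritations, but good workarounds are available.",
             "in_service":     "Minor functions are affected, but good workarounds are available."},
}

def triage_checklist(context: str) -> str:
    """Build a quick checklist for an analyst triaging an incident in the given context."""
    return "\n".join(f"{sev}: {guide[context]}" for sev, guide in SEVERITY_GUIDE.items())

print(triage_checklist("in_service"))
```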

Severity Analysis

Now that we have more guidance on identifying the severity of an incident, how should it be reported? I believe that anyone can report an incident, bug, or something not working, but it is up to an analyst to determine the severity level of the report.

So, the first step is for the person who discovered the issue to open a ticket. Of course, if it is a customer and we don’t have a self-support system, they will probably report it to an employee in support or sales, and the employee will create the ticket for the customer. All tickets should be auto-routed to the analyst team, where each one is assigned to an analyst to triage. The analyst will assign the severity level and route the ticket to engineering support, where it will be reviewed, discussed, and prioritized. The analyst in this instance can be a QA, a BA, or even a developer assigned to the task; the point is to have a dedicated team or person responsible.

During the analysis, a timeline of the failure should be established. What led up to the failure, the changes, actions taken, and people involved should all be laid out in chronological order. Also, during triage, a description of how to recreate the failure should be written if possible. The goal is to collect as much information about the failure as possible in one place so that the team can review it and help investigate. Depending on the Sev level, expectations for the level of detail and the speed of feedback should be established.

Conclusion

This is turning out to be a lot deeper than I care to dive into right now, but it gives me food for thought. My takeaways so far are to

  • formalize severity levels
  • define how to identify the levels
  • assign someone to do the analysis and assign the levels

 

What Makes a Good Candidate for Test Automation?

Writing large UI-based functional tests can be expensive in terms of money and time, and it is sometimes hard to know where to focus your test budget. New features are good candidates, especially the most common successful and exceptional paths through the feature. But when you have a monster legacy application with little to no coverage, it can be hard to ascertain where you will get the biggest bang for the buck.

Bugs, Defects, Issues…It Doesn’t Work

I believe bugs make good candidates for automation, especially if regression is a problem for you. Even if regression is not an issue, it’s always good to protect against regressions, so automating bugs is kind of a win-win in terms of risk assessment. Hopefully, when a bug is found, whoever finds it or adds it to the bug database provides reproduction steps. If the steps are a good candidate for automation, automate it.

Analyzing Bugs

What makes a bug a good candidate for test automation? When analyzing bugs for automated testing I like to evaluate them against four basic criteria, in ascending order of precedence:

  • The steps are easy to model in the test framework.
  • The steps are maintainable as an automated test.
  • The bug was found before.
  • The bug caused a lot of pain to users or the company.

It is just common sense that “the bug caused a lot of pain” is the top criterion: if a bug caused a lot of pain, you don’t want to repeat it, unless you like pain. Yet if the painful bug would be a maintenance nightmare as an automated test, the steps are hard to model, and the bug wasn’t found before, you may want to just mark it for manual regression. If a bug matches two or more of the criteria, I’d say it is a high-priority candidate for test automation.
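A rough sketch of that evaluation as a scoring helper (the threshold of two is just my rule of thumb, and the names are made up):

```python
from dataclasses import dataclass

@dataclass
class BugAssessment:
    easy_to_model: bool   # the steps are easy to model in the test framework
    maintainable: bool    # the steps are maintainable as an automated test
    found_before: bool    # the bug is a repeat offender
    caused_pain: bool     # the bug caused a lot of pain to users or the company

    def automation_priority(self) -> str:
        # Two or more matching criteria -> high-priority automation candidate.
        score = sum([self.easy_to_model, self.maintainable, self.found_before, self.caused_pain])
        return "high priority to automate" if score >= 2 else "candidate for manual regression"

print(BugAssessment(easy_to_model=True, maintainable=False,
                    found_before=False, caused_pain=True).automation_priority())
```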

Conclusion

These are just my opinions, and there is no study to prove any of it. I know this has been thought about and pondered, maybe even researched, by someone. If you know where I can find some good discussions on this topic, or if you want to start one, please let me know.

 

How Much Does Automated Test Maintenance Cost?

I saw this question on a forum and it made me pause for a second to think about it. The quick answer is that it varies. The sarcastic answer is that it costs as much as you spend on it, or how about: it costs as much as you didn’t spend on creating a maintainable automation project.

I have only been involved in two other test automation projects prior to my current position, and in both I also had feature development responsibility. On one of the projects, compared against time developing features, I spent about 10-15% of my time maintaining tests and about 25% writing them, so roughly 30-40% of my total test time went to maintenance. Based on what I know today, some of my past tests weren’t that good, so maybe the numbers should have been higher or lower. On the other project, test maintenance was closer to 50%, and that was because of poor tool choice. I can state these numbers because I tracked my time spent, but I could not use them as benchmarks to estimate maintenance cost on my current project or any other unless the context were very similar and I could easily draw the comparison.

I have seen people say “it’s typically between this and that percentage of development cost,” or something similar. Trying to quantify maintenance cost is hard, very hard, and it depends on the context. You can try to estimate based on someone else’s guess at a rough percentage and hope it pans out, but in the end it depends on execution and environment. An application that changes often vs. one that rarely changes, poorly written automated tests, a bad choice of automation framework, the skill of the automated tester… there is a lot that can change cost from project to project.
I am curious whether someone has a formula to calculate an estimate across all projects, but an insane focus on the maintainability of your automated test suites can significantly reduce costs in the long run. So a better focus, IMHO, is on getting the best test architecture, tools, framework, and people, and making maintainability a high-priority goal. Also, properly tracking maintenance in the project management or bug tracking system can provide a more valuable measure of cost across the life of a project. If you properly track maintenance cost (time), you get a benchmark that is customized for your context. Trying to calculate cost up front, with nothing to base the calculations on but a wild uneducated guess, can lead to a false sense of security.
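As a tiny example of the kind of benchmark that tracked time can give you (the numbers are made up; the arithmetic is the same as my 30-40% figure above):

```python
def maintenance_share(hours_writing: float, hours_maintaining: float) -> float:
    """Fraction of total automated-test time spent on maintenance."""
    return hours_maintaining / (hours_writing + hours_maintaining)

# e.g. time entries pulled from the project management system (made-up numbers)
print(f"{maintenance_share(hours_writing=100, hours_maintaining=55):.0%}")  # -> 35%
```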

So, if you are trying to plan a new automation project and you ask me about cost the answer is, “The cost of having automated tests…priceless. The cost of maintaining automated tests…I have no idea.”

Specifications?

This is more of a “what I’m thinking” post than something full of good information. I am trying to frame my idea of feature specifications and I need to answer some questions. This seemed like as good a place as any to store my questions, but you are more than welcome to provide answers and opinions.