
What quality metrics are important to you?

I’ve been getting a lot of crap from people I know recently about not posting to my blog in a long time. Hopefully this entry buys me another 3 months until the next post.

At any rate, I wanted to spend some time writing about something I’ve been working on recently having to do with metrics. Now, I’m not a metrics guy by any means, and I have generally been successful at avoiding metrics hell at the smaller companies/shops I have worked with. This time around, though, I’m gonna jump on the bandwagon and talk about some metrics I think are really useful, if somewhat experimental, for evaluating your quality approach. Hopefully you will agree that these metrics don’t completely border on ludicrousness, and that you’ll share some of your own that have served you well.

One of my favorite metrics to look at is the ‘escaped defect’. An escaped defect represents a product issue that is found outside some stage of the software delivery lifecycle. For example, a defect might have escaped a sprint, or a release. You might track this because you want to learn about the issues being discovered outside the stage where you feel the overwhelming majority of issues should be discovered. This is useful within a sprint because the intent is to create shippable product. It is useful after a release because it highlights learning opportunities such as:

  1. Understanding how the product was being used, if the defect was caught by a customer
  2. Understanding what the gaps in test coverage were
  3. Asking whether we knew about the issue before release but decided to deploy anyway, whether that was a good decision, and why or why not
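If you want to start tallying escaped defects, a small script over an export from your defect tracker goes a long way. Below is a minimal Python sketch; the record fields (release, found_in) and stage names are hypothetical placeholders you would map to whatever your tracker actually stores.

```python
from collections import Counter

# Hypothetical defect records exported from a tracker. The field names
# ("release", "found_in") are placeholders, not tied to any particular tool.
defects = [
    {"id": 101, "release": "2014.08", "found_in": "sprint"},
    {"id": 102, "release": "2014.08", "found_in": "regression"},
    {"id": 103, "release": "2014.08", "found_in": "customer"},
    {"id": 104, "release": "2014.09", "found_in": "customer"},
]

# Treat anything discovered after the sprint as having "escaped"; adjust the
# set of stages to wherever you draw the line for your team.
ESCAPED_STAGES = {"regression", "customer"}

total_per_release = Counter(d["release"] for d in defects)
escaped_per_release = Counter(
    d["release"] for d in defects if d["found_in"] in ESCAPED_STAGES
)

for release in sorted(total_per_release):
    total = total_per_release[release]
    escaped = escaped_per_release[release]  # Counter returns 0 when missing
    print(f"{release}: {escaped}/{total} defects escaped ({escaped / total:.0%})")
```

The same tally works at whatever boundary you care about: escaped the sprint, escaped the release, or escaped all the way to a customer.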

A related experiment I am working on is part one of a design to answer a few questions:

  1. How effective is the testing?
  2. Where are we testing the right things and where are we not?
  3. Do we know what testing is valuable and what testing is less so?

This might be a Pandora’s box to open, but the exercise seems interesting enough, so let’s press on.

The diagram below shows a mockup of how this can start to be visualized: you take data that already exists in your organization and associate it with each of the nodes. In this diagram we are representing ‘All testing’ conducted for a product/platform, and from there you can drill into its child nodes. For the purposes of this article, we dive into manual test cases as an example, since that seemed like a reasonable place to start.

[Diagram: the ‘All testing’ tree and its child nodes]

The next diagram shows what it might look like once you populate the tree with the actual test results and defect data you are receiving from your customers about a particular part of your product or platform.

[Diagram: the ‘All testing’ tree populated with test and defect counts]
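If you would rather poke at this outside of a drawing tool, the tree is easy to model as plain nested data. The sketch below is a hypothetical Python representation; the node names and counts are made up, not the actual figures behind the diagrams.

```python
# Hypothetical testing tree; replace the names and counts with data pulled
# from your own test management and defect tracking systems.
all_testing = {
    "name": "All testing",
    "count": 1700,
    "children": [
        {
            "name": "Manual test cases executed",
            "count": 500,
            "children": [
                {
                    "name": "Defects logged",
                    "count": 205,
                    "children": [
                        {"name": "Defects fixed", "count": 132, "children": []},
                    ],
                },
            ],
        },
        {"name": "Automated test cases executed", "count": 1200, "children": []},
    ],
}

def walk(node, depth=0):
    """Print each node with indentation so the drill-down is easy to eyeball."""
    print("  " * depth + f"{node['name']}: {node['count']}")
    for child in node["children"]:
        walk(child, depth + 1)

walk(all_testing)
```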

You can see how you could start to calculate percentages of your testing effort. In this diagram, for example:

~41% of manual test cases executed resulted in a defect being logged.

~26% of manual test cases executed resulted in a defect being logged and a problem being fixed.

~35% of all defects opened are not fixed or are duplicates.
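To make the arithmetic concrete, here is how those kinds of figures fall out of raw counts. The counts below are made up to roughly match the percentages above (and the tree sketch earlier); they are not real data.

```python
# Hypothetical raw counts, chosen only to roughly match the percentages above.
manual_cases_executed = 500
defects_logged = 205   # defects opened as a result of manual test runs
defects_fixed = 132    # of those, how many ended in a shipped fix

defect_rate = defects_logged / manual_cases_executed           # ~41%
fixed_rate = defects_fixed / manual_cases_executed             # ~26%
not_fixed_or_duplicate = 1 - (defects_fixed / defects_logged)  # ~36%

print(f"Manual cases that produced a logged defect: {defect_rate:.0%}")
print(f"Manual cases that produced a fixed defect:  {fixed_rate:.0%}")
print(f"Defects not fixed or duplicates:            {not_fixed_or_duplicate:.0%}")
```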


Diving deeper into this hypothetical 35% of defects that will never get fixed, you can explore why that is the case. You can bubble up some learnings around duplicates to reduce them going forward, and you can also discover the kinds of defects that won’t get fixed because they were never defects to begin with; they may really be enhancement requests, misfilings, and so on.

You could also analyze these trends over time, measure the changes you make, and identify whether the outcomes have shifted for better or worse. That lets you move your effort to where you think you deliver the most value to your customers over the long haul, lowering your quality issues as your customer base scales.

Even more interesting to me is being able to visualize the problem domain so you can wrap your head around how to move the needle. Enjoy!

Until next time.
