Testers do a great job of breaking software, but I think testing the product like our users do is a much more elusive idea that we struggle with. I was reminded of this recently by a couple of examples that I will generalize, but I found them interesting enough to share.
- If you have installation instructions that live within your product, you might want to click the link to them and make sure they work. When testers get comfortable taking shortcuts because it’s faster, they bypass how their customers use the product and miss completely obvious things.
- When you are a tester and you install the product to test it completely differently than how you are telling your customers to install it, you may find that this saves you time, but it creates a lot of customer support calls and incidents in the long run. Try following the documentation like you ask your customers to do, and see how you like it. Maybe the documentation needs to be improved.
I’ve been getting a lot of crap from people I know recently about not posting to my blog in a long time. Hopefully this entry buys me another 3 months until the next post.
At any rate, I wanted to spend some time writing about something I’ve been working on recently having to do with metrics. Now, I’m not a metrics guy by any means and have generally been successful at avoiding metrics hell at many of the smaller companies/shops I have worked with. This time around, though, I’m gonna jump on the bandwagon and talk about some metrics I think are really useful, if experimental, for evaluating your quality approach. Hopefully you will agree that these metrics don’t completely border on ludicrousness, and you can share some that have served you well.
One of my favorite metrics to look at is the ‘escaped defect’. An escaped defect represents a product issue that is found outside some stage of the software delivery lifecycle. For example, a defect might have escaped a sprint, or a release. You might track this because you want to learn about the issues being discovered outside the stage where you feel the overwhelming majority of issues should be discovered. This is useful within a sprint because the intent is to create shippable product. It is useful after release because it highlights some learning opportunities, such as:
- Understanding how the product was being used, if the issue was caught by a customer
- Understanding what the gaps in test coverage were
- Did we know about this issue before release but decided to deploy anyway? Was this a good decision? Why or why not?
A related experiment I am working on is Part I of a design to answer a few questions:
- How effective is the testing?
- Where are we testing the right things and where are we not?
- Do we know what testing is valuable and what testing is less so?
This might be a Pandora’s box to open up, but the exercise seems interesting enough, so let’s press forward.
The diagram below shows a mockup of how this can start to be visualized, so that you are looking at data that exists in your organization and associating it with each of the nodes. In this diagram we are representing ‘All testing’ that is conducted for a product/platform, and from there you can start to drill into its child nodes. For purposes of this article, we are diving into manual test cases as an example, since it seemed like a reasonable place to start.
The diagram below shows what it might look like once you start to populate it with the actual test and defect data you are receiving from your customers about a particular part of your product or platform.
You can see how you could start to calculate percentages of your testing effort. In this diagram, for example:
~41% of manual test cases executed resulted in a defect being logged
~26% of manual test cases executed resulted in a defect being logged and a problem being fixed.
~35% of all defects opened are not fixed / are duplicates.
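These percentages are just simple ratios over counts you can pull from your test-management and defect-tracking tools. Here is a minimal sketch of the arithmetic; all of the counts below are made-up numbers chosen only to land near the figures above, not real data:

```java
// Hypothetical counts for illustration only; pull the real numbers from
// your own test-management and defect-tracking tools.
public class TestEffortMetrics {
    // Percentage of `part` within `whole`, rounded to one decimal place.
    static double pct(int part, int whole) {
        return Math.round(1000.0 * part / whole) / 10.0;
    }

    public static void main(String[] args) {
        int manualTestsExecuted = 340;   // assumed
        int testsLoggingDefect = 140;    // assumed
        int testsLeadingToFix = 88;      // assumed
        int defectsOpened = 155;         // assumed
        int defectsNotFixedOrDupe = 54;  // assumed

        System.out.println(pct(testsLoggingDefect, manualTestsExecuted)
                + "% of manual test cases resulted in a defect being logged");
        System.out.println(pct(testsLeadingToFix, manualTestsExecuted)
                + "% resulted in a defect being logged and a problem being fixed");
        System.out.println(pct(defectsNotFixedOrDupe, defectsOpened)
                + "% of all defects opened are not fixed / are duplicates");
    }
}
```

The point is less the arithmetic and more that each node in the tree gets a ratio you can trend over time.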
Diving deeper into this hypothetical 35% of defects that will never get fixed, you can explore why this is the case and bubble up some learnings around duplicates to reduce them moving forward. You can also discover the types of defects that won’t get fixed because they were never defects to begin with and might really be enhancement requests, misfilings, etc…
You could analyze the trends over time, measure your changes, and identify whether the outcomes have changed for better or worse. From there you can start to shift your efforts to where you think you give more value to your customers in the long haul, lowering your quality issues while you scale your customer base.
Even more interesting for me is being able to visualize the problem domain so you can wrap your head around how to move the needle. Enjoy!
Until next time.
Do you have a ‘QA’ bottleneck? Is it really getting you down? You know, holding you back from being really productive. Well, you’re in luck. If you follow some simple steps you will be well on your way to highly productive sprints and even higher quality product releases. But first, a short read to make it easier to visualize. If you have ants in your pants, go ahead and skip to the end to see the summary. Jeez, some people can be so impatient. Okay, now we have this ‘QA’ bottleneck, right?
First, your problem is probably a misunderstanding of what quality assurance is. Don’t worry, there are a lot of people out there who think like you; you are not alone. You probably think that because you have a group called ‘QA’, they are solely responsible for quality. I mean come on, it says ‘Quality’ in the name so it must be true. But this thinking is part of the problem: you’ve been institutionalized to think that quality is the responsibility of one group, but…wait for it…here it comes…it isn’t. If you change the mentality such that everyone has a role to play in quality, you have begun a cultural transformation and are already thinking differently about how to solve your testing challenges.
Now, even if you believe the first statement, how does that help you? Changing the way we think is great and all, but you need results. You might get the warm and fuzzies from this change, but it in itself won’t produce the results you are looking for unless you put some meat on the bone, some grease on the skids, some uhh…what you really want is some action, right? So, imagine the scenario where you are at sprint planning and your scrum team is choosing their work. The tester says “Wait…I can’t take on any more work, I’m all full”, but the developers are saying “We have more capacity, we can do more”. What the team decides to do from this point will define them. The successful teams will dive into why this gap exists and work on engineering productivity tasks that, over time, chip away at these gaps. The less successful teams will just complain about it, because that’s a whole lot easier than actually fixing it. When this happens, they usually want someone else to fix the problem for them. Where’s that damn QA Manager at?
Here’s another common scenario. It happens near the very end of the sprint, when all of the ‘development’ work is done, but only the testing tasks remain. Now we are really in big doo-doo. What are we gonna do now, huh? Well, it’s obvious that your testers suck, because the developers are all done with their tasks and only testing is remaining. Agreed? I mean, what’s the problem, dude? Get it together, seriously. But what’s not so obvious is that there were some delays in delivery (you know software is a creative process, right? It’s not completely predictable), and by the way, it takes just way too long to set up your environments and data. But setting up environments is not testing, and setting up data isn’t testing either…what gives? What we have here is a failure to automate. One of the growing issues is not just your testing gap, but your setup and configuration gap. If you spend the time fixing that, you are well on your way to removing your bottleneck and being awesome. You seem like the type of person that likes being awesome, so keep reading; I have some more ideas on how you can be awesome.
When you get to the point where you are analyzing your delivery pipeline and seeing inefficiencies not just with your people, but with your processes and infrastructure, you have elevated your thinking to another level. The ‘old’ ways of thinking are distant memories and you are starting to think like a real engineer now. When you build software that can be installed and configured easily, when you can spin up your environments quickly and predictably, when your data is reused and not recreated from scratch each time, and when your whole team is writing automated tests and reviewing the failures when they break, you have created an infrastructure that supports your processes. Sounds great, doesn’t it? Then why the hell are you not doing it? That’s simple: it’s much easier to complain about these things than to make the investments to improve them, which leads me to a final point…
The folks you hire and train should be smart, creative and highly motivated people. When your testers understand how to read and write code, they gain the respect of their team members, and when they can write code they can create tools that make them more productive. When developers test their software and write automated functional tests, they gain the respect of their team members, and they are able to jump in to help test near the end of the sprint while still maintaining the expected quality levels for your product. So, at this point, you either already knew this or have heard it before…but if you are not doing this on your scrum team, then you may need to question how smart, creative or motivated you and your team members are. Don’t worry, if that sounds harsh, I hold myself and my team members to the same criteria, so what’s fair is fair.
So kids, in summary, what we learned today was:
1. Chip away at the culture of ‘QA’ being some other team’s responsibility
2. Identify your true bottlenecks, not just the stage where the bottleneck exposes itself.
3. Create an infrastructure that supports your processes
4. Hire and train smart, motivated and creative people
Oh, and if you have tried all of the above and you still have issues, fire your ‘QA’ Manager because he probably sucks and is not convincing enough…
Note: No scrum teams were harmed in the making of this blog post.
I had a discussion about different test automation design considerations with another tester the other day and thought I might share some of what we talked about. I’d love feedback on what you think are good/bad ideas, and also on anything important that I didn’t cover. Also, stay tuned, as I’m breaking this up into multiple articles. As an aside, I’m intentionally writing this in pseudocode so it is mostly language agnostic and mostly based on the ideas/concepts. However, at some point I may amend this with some example code or post it to GitHub when I have some more time.
For some background, the tools and framework we are using:
Selenium WebDriver, Java, Eclipse
One of the first things I do before I start automating is think about the context. If you are starting from scratch (no existing test automation for the app) and the site functionality is already built, I start with automating the most used/critical functionality. For example, you might need to automate Registration, Login, My Account, Forgot Password, etc…
If you already have test automation for the site, then you will hopefully have created some test automation patterns that make it easy to extend your test automation scripts. Some of the design considerations I am talking about here will make it easier for this to be done.
When creating the tests, I tend to write my tests from the ‘Scenario’ perspective. I’m not currently using any BDD framework to create these, but I find that writing tests from a user-scenario perspective is very helpful because you are modeling tests after user behaviors and thinking of real people who are using your site.
What that means for this test site is that I can have New Users with promo codes (i.e. they get a discount), New Users without promo codes (no discount), New Users that don’t like to provide ALL of their information during signup and others who put everything in, and Administrators and Operations personnel that need access. It is from this vantage point that I start to create my automation tests.
I almost always start off with creating a new user first, for the simple reason that the Registration test code is usually a dependency and can be used to chain together other test Scenarios which will be demonstrated below.
I’ll create a test class and call it RegisterNewUserTest()
This class will sort of act like a driver class (think back to CS 101); it is really just a series of mini-workflows. The Registration process, abstracted, is really 3 different pages/screens (i.e. enter demographic info, enter security and billing information, verification & display dashboard).
Since a Product Owner could easily change this structure around, putting some of the information from the first page onto the second and vice versa, we want to have a design that requires as few changes as possible when this occurs. Also, if they add a fourth page, then we can simply make another page and put all of our logic in there. So the basic structure starts to come into view:
demographicPageTest(); // all the checks that deal with the demographic information go here
// fourthPageTest(); this could be added very easily into the workflow if needed
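Fleshing that sketch out, the driver class might look like the following. This is a compilable sketch only: the page-level method bodies are stubs, and the method names are my assumptions; in a real suite each one would drive the Selenium interactions for that screen.

```java
// Driver-style test class: one mini-workflow method per registration page.
import java.util.ArrayList;
import java.util.List;

public class RegisterNewUserTest {
    private final List<String> pagesCompleted = new ArrayList<>();

    // all the checks that deal with the demographic information go here
    public void demographicPageTest() { pagesCompleted.add("demographic"); }

    // security and billing information checks go here
    public void securityAndBillingPageTest() { pagesCompleted.add("securityAndBilling"); }

    // verification and dashboard-display checks go here
    public void verificationPageTest() { pagesCompleted.add("verification"); }

    // public void fourthPageTest() { }  // easy to slot in if the PO adds a page

    // The "driver": runs the mini-workflows in page order.
    public List<String> runWorkflow() {
        demographicPageTest();
        securityAndBillingPageTest();
        verificationPageTest();
        return pagesCompleted;
    }

    public static void main(String[] args) {
        System.out.println(new RegisterNewUserTest().runWorkflow());
    }
}
```

Because each page lives behind its own method, moving a field between pages touches only the affected page methods, and the driver’s order of calls stays readable.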
The other common concern is test data. When you are testing a web application, you don’t typically want the test data being created inside the actual test code, because that makes it more difficult to edit and prevents you from running lots of different data through the same test in a loop. Here’s a use case I tend to think about: if I’m registering a new user, I might have a user from California, a user from Oklahoma, male and female users, and users with different billing information. From my experience you can create separate tests for these scenarios, but it’s been more helpful to create different test data that can get fed into the tests.
What I’ve usually done is create some associated test data in XML, but I’d imagine you can use JSON, Excel, or any other format you prefer.
I might name the test data similarly to the test class I already created, so that it’s understood that this data is relevant to this test class; let’s call it RegisterNewUserTest.xml. Inside this XML would be a series of different types of users. Alternatively, one of the other methods I like is to describe the data that is in the file, which is to say something like MilitaryStudentsWithPromo.xml. This file would only contain data that matches the description. Either of these methods is okay; the last one is more descriptive, so that if the test fails when it is processing the MilitaryStudentsWithPromo.xml data, you have better context on why the test might have failed. Maybe you have a problem with only the military workflows in the system?
So here is how you might populate the test data in the XML. This should be done with some structure so that you can call up specific pieces of data in the code, OR support the ability to loop through the test data if you want to run different data combinations through the test.
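To make that concrete, a hypothetical RegisterNewUserTest.xml might look like the following; the element names and values are purely illustrative:

```xml
<!-- RegisterNewUserTest.xml: illustrative structure, not a required schema -->
<users>
  <user id="1">
    <firstName>Jane</firstName>
    <lastName>Doe</lastName>
    <state>California</state>
    <gender>F</gender>
    <promoCode>SPRING10</promoCode>
  </user>
  <user id="2">
    <firstName>John</firstName>
    <lastName>Smith</lastName>
    <state>Oklahoma</state>
    <gender>M</gender>
    <promoCode></promoCode>
  </user>
</users>
```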
Notice that we have two users with similar structure but different data. You can use this structure to change the outcome of the test and check for negative scenarios, like what if I put in a username with invalid characters, or a city that doesn’t exist in the drop-down? If this data lived inline with the code, it would limit your options for dynamically changing the test without recompiling the code.
This test data can now be passed as reference data to the class/method signatures.
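Reading the file and looping each record through the test might look like the sketch below. It assumes the XML shape described above (a `<users>` element containing `<user>` records); the class and method names are mine, not part of any framework:

```java
// Sketch: parse each <user> record so the same test can run once per record.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class RegistrationDataFeeder {

    // Collects the <state> of every <user>; in a real suite you would hand
    // the whole record to the registration test instead.
    public static List<String> statesFrom(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList users = doc.getElementsByTagName("user");
            List<String> states = new ArrayList<>();
            for (int i = 0; i < users.getLength(); i++) {
                Element user = (Element) users.item(i);
                states.add(user.getElementsByTagName("state").item(0).getTextContent());
                // registerNewUser(user);  // hypothetical: one test run per record
            }
            return states;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<users>"
                + "<user><state>California</state></user>"
                + "<user><state>Oklahoma</state></user>"
                + "</users>";
        System.out.println(statesFrom(xml));  // one entry per user record
    }
}
```

The loop is the payoff: adding a new scenario means adding a `<user>` record to the file, with no code changes or recompiling.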
In Part II I will go over the following:
Refactoring (Dependencies inside of the workflows, think a testers version of dependency injection)
Should you delete the test data or not? If you’re not deleting the data, then you need unique login creation information
Should you be using assertions or logging?
One of the patterns that I think deserves more exploration is the PageObject pattern. After doing some research on it, I think it is a complementary technique that would fit into the design I have presented pretty well, but since I have not tried it yet, I’ll have to get back with you when I do.
I’ve actually written a test automation library very close to what I’ve described here and it was a great success for the following reasons:
It was easy to read and follow
It was easy to add new changes (if adding a new test, you can reuse a lot of the tests that were created by calling them and chaining together another test)
It reduced how much new code needed to be added because it was based on modeling the web app and putting that into reusable classes
I’ve been loosely working with SoapUI for over a year now and thought I would share some of my thoughts on it so far. Overall, SoapUI Pro is a great tool that has made my testing team more productive and simplifies many of their common testing tasks. Here are some items I would like to see on the roadmap, or to find out if/how others are working around these gaps. Also, since I’m by no means an expert, I’m willing to stand corrected on some of my claims here. I’d love to be proven wrong on these pain points so I can improve my team’s productivity.
SoapUI Pro needs better support for version control systems (i.e subversion).
- SoapUI Pro should allow you to do a check-in directly from the tool, and also have merge support so test cases, suites or projects can be merged in painlessly. Currently, testers/developers have to leave the tool and make the check-in from a secondary tool. This is inconvenient and does not help build a culture of continuous check-ins, which supports the goal of continuous integration. It’s my understanding that there is an Eclipse plug-in for SoapUI but not SoapUI Pro. If this is the case, that does not make sense to me (since I’m a paying customer). I would like to be shown otherwise, but I have not been able to find info suggesting that an existing plugin has the Pro features.
SoapUI Pro has some usability issues that need to be ironed out.
- For example, where is the undo feature? If you make a mistake in SoapUI you are screwed (hope you checked in your work, see previous bullet point)
- Making changes to properties does not necessarily propagate down the way you would expect. My best example of this is in the JDBC Connections tab, where you can edit a connection string that could be used by 1 or more test cases. Once you edit the connection string directly, you would think all the test steps would get updated, but you would be wrong. There is an additional step of ‘configuring’ the connection that needs to be performed to make the test cases ‘aware’ of the change.
SoapUI Pro needs better error messaging when running from the command line.
- In our case we have a Jenkins job which uses a Python wrapper to execute the test runner from the command line. If/when tests fail, it is difficult to determine why they failed. It’d be nice to see what service endpoint a test was pointing to and what database it was trying to connect to, but instead it usually just lists a whole bunch of XPath errors. I’m exploring working around this on our end by supplementing the test steps with better logging information, but similar to JUnit or other frameworks, you should have the ability to plug in a log4j library and get useful error information more easily.
SoapUI Pro needs to flag disabled test steps as ignored or skipped, JUnit style, when reported on in Jenkins.
- This will allow the testers to have visibility to what is disabled and conduct test maintenance more easily.
SoapUI’s support for Groovy is excellent
- This is a very powerful way to expand the capabilities of the tool. Well done! Although adding Python support would be even better 🙂
Overall, I’m a huge fan of the tool. It’s hard to find a tool better suited to testing web services that also allows test cases to be maintained in an organized library, is easy to get up and running with minimal training, and lets you test web services end to end.
- “It works on my machine”
- If the Developers help test the Software, then what are the Testers going to do?
- “It’ll only take me a few minutes to write the code”
- “It’s a 1 line code change”
- “Don’t worry, I’ll explain how it works later, it’s pretty straightforward”
- During the last day of the Sprint: “I have time to bring more work into the Sprint, but I’m not sure if there is enough time to test it”
- Add functionality that was not requested
- Make a change to a part of the system that was not planned, and then don’t communicate it to the Tester
- Talk to the Product Owner without a Tester present, and then misrepresent the business requirements
- “That’s an implementation detail”
Here are some of the things I do when testing in a Sprint that I have found helpful. Consider them part of the recipe that contributes to a culture of quality, but I would love to see some comments or experiences from you that will help make this recipe complete.
Test the Process – are we doing things that make sense?
- My basic assumption is that a two week sprint is not a long time, a plan that takes you 2-6 hours to create during planning sessions should be able to hold up pretty well until the end of the sprint.
- Pre-planning is critical. Do you have lead times on hardware / test environments / external software dependencies? Do you actually have commitments from the key personnel who are expected to be part of the sprint?
- Are we breaking down silos? Business and technical decisions that happen between only two people OR among the same functional disciplines may mean that information does not flow to the individuals who need to test it.
- I like to encourage just enough design up front to identify a general approach to solving some business problem, but not so much detail that it hinders exploration and learning
- Are my or my team members’ impediments and blockers actually being removed, or is lip service just being paid to them?
Test the Business Value and Acceptance Criteria – are we building the right thing?
- Do the backlog items have the right level of detail in the acceptance criteria? When I’m testing, I tend to advocate breaking down work into chunks as small as possible, or at least as small as I can get away with. I find that when stories are too big, it leaves large gaps in the acceptance criteria, AND it masks many of the dependencies that need to be flushed out in order to deliver value.
- Did we ask enough questions about the “why” so that we can challenge assumptions and present alternatives that still accomplish the goal?
- Are we vetting information received from the technical experts and solution implementers through the Product Owner when it impacts the business?
- Even if nothing else is automated, the acceptance criteria should be the exception and get automated.
Test the Delivery – are we building in the right order? Incremental and Iterative
- Dependencies matter, and so do priorities. When the dependencies matter, lay out the strategy of execution order. When you walk away from planning with a clear plan for your incremental approach, you should also consider what iterations are required to deliver business value.
- When teams are working on too much all at one time, it may seem like they are accomplishing a lot, but focus does not just apply to having dedicated resources; it also applies to what people are working on, and when they are working on it. Every team member working on a different story at the same time is typically not a characteristic of a team, nor is it a characteristic of focus.
- Testing is one of the most critical aspects of making a Sprint successful, yet it is amazing how little is understood about how to invest in this area to bring alignment.
Test the Technology – are we building with the right tools and infrastructure?
- Internal quality should not be negotiable if the team thinks it is necessary. It’s important that tests are being developed starting from the Acceptance Criteria / Examples all the way down to ‘SQLUnit’ tests verifying stored procedure functionality and ‘shunit’ verifying shell scripts.
- I like to encourage Developers to run some of the manual tests, as well as create api / integration tests so that they can have visibility to how technical decisions either contribute to testability or lack thereof.
Test the Measurements – are we vetting our measurements, are we interpreting them correctly, and do we act on them accordingly?
- I like tracking business value instead of tracking hours: how many stories have been completed at a given point in the sprint? Is the user story I care the most about actually at risk of not being completed? Burning down hours is great, but hours don’t trace back to business value as well as user stories do.
- Talking about stories instead of hours helps reinforce the focus around business value and outcomes as well as provide better context for decision making
- I bring up uncomfortable topics in the retrospective as if it is the last time I am going to be heard by everyone, AND I press for at least one or two of the action items or improvements to be tried out in subsequent sprints.
Share your thoughts