Wednesday, November 16, 2011

Why Testing Everything Is a Stupid Idea...

I am sure you have come across managers and stakeholders who say, "I want my product to be 100% tested; I cannot tolerate less than 100% coverage." I can only say that those folks lack an appreciation of the complexity of testing. No software can ever be tested completely; exhaustive testing is simply not possible, and it will never be practical or cost effective. The good news is that we have mathematics and statistics to help. I use the 3 Cs (Coverage, Confidence and Calculated risk) as a measure of how well the software I work on has been tested. I will briefly discuss these 3 Cs here.
Defining the quality goal for your product should be the first step. Let's say we decide that our software product will be 99% bug free, and that we want to be 99% confident we have achieved that goal. Once the goal and the confidence level are set, statistics such as random sampling can help us work out the number of test cases required (for example, 99 test cases for a set of 100 inputs). But random sampling alone does not work very well, and it does not guarantee coverage. We also know that defects in software are often uncovered not by a single input but by a combination of two or more inputs, and this is where pairwise testing helps. Fortunately, there are tools that generate combinations for different sets of input variables, a methodology better known as combinatorial test design.
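As a rough illustration of how a goal plus a confidence level turns into a number of tests, here is a minimal sketch using the standard zero-failure ("success run") formula, n >= ln(1 - C) / ln(R): if n randomly chosen tests all pass, you can claim with confidence C that reliability is at least R. The function name and numbers are mine, not taken from any particular tool.

```python
import math

def zero_failure_sample_size(reliability, confidence):
    """Number of randomly sampled test cases that must all pass before we can
    claim, with the given confidence, that the software meets the reliability
    goal (success-run formula: confidence = 1 - reliability ** n)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# 99% confident that the product is 99% bug free -> 459 passing random tests,
# which is exactly why blind random sampling gets expensive so quickly.
print(zero_failure_sample_size(reliability=0.99, confidence=0.99))
```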
Automated Combinatorial Testing for Software (ACTS) is one such wonderful tool, developed by researchers at NIST. You feed dimensions and values to the tool and it generates pairwise or n-wise scenarios for you. Pretty cool!
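To make the idea concrete without tying it to ACTS's own input format, here is a minimal greedy pairwise generator; the parameters and values are made up, and a real tool such as ACTS will usually produce a tighter suite and also supports higher-strength (n-wise) coverage.

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy pairwise generator: at every step pick the full combination that
    covers the most not-yet-covered value pairs. Not as compact as ACTS, but it
    shows how far below the full cartesian product a pairwise suite can sit."""
    names = list(parameters)
    # Every (param, value) pair of assignments that must occur together at least once.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    all_rows = list(product(*(parameters[n] for n in names)))
    suite = []
    while uncovered:
        def gain(row):
            assign = dict(zip(names, row))
            return sum(1 for (a, va), (b, vb) in uncovered
                       if assign[a] == va and assign[b] == vb)
        best = max(all_rows, key=gain)
        if gain(best) == 0:
            break
        assign = dict(zip(names, best))
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if not (assign[a] == va and assign[b] == vb)}
        suite.append(dict(zip(names, best)))
    return suite

# Hypothetical test dimensions: 3 x 2 x 3 = 18 exhaustive combinations,
# but all value pairs are typically covered by 9-10 tests.
params = {
    "browser": ["Chrome", "Firefox", "IE"],
    "os":      ["Windows", "Linux"],
    "locale":  ["en", "de", "ja"],
}
for test in pairwise_suite(params):
    print(test)
```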
Pairwise testing alone will not be enough. A good code coverage tool will give you a detailed report on branch, statement and function coverage and make sure that nothing important is being ignored. I use the Intel code coverage tool for C++ code; if your code base is in the Java world, you have plenty of free options available. I usually do a post-coverage analysis to answer questions such as the following (a small sketch of such an analysis appears after the list):
  • Which test case caused the coverage?
  • What is the quality of the covered code? (You don't get this from the coverage report, but from domain knowledge.)
  • What is the correlation between block coverage and decision coverage?
  • What is the most cost-effective way of addressing the uncovered code?
  • Is there any correlation between uncovered code and defects?
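As a sketch of what that post-coverage analysis can look like, here is a small script that summarises an lcov tracefile (the SF:/DA: text format produced by the gcov/lcov toolchain on C++ code) into per-file line-coverage percentages and flags files below a threshold. The Intel tool has its own report format, so treat the file format and the 80% threshold here as assumptions for illustration.

```python
import sys
from collections import defaultdict

def parse_lcov(path):
    """Summarise an lcov tracefile (SF:/DA:/end_of_record records) into
    per-source-file counts of instrumented lines and lines actually hit."""
    hits = defaultdict(lambda: [0, 0])   # file -> [lines instrumented, lines hit]
    current = None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("SF:"):
                current = line[3:]
            elif line.startswith("DA:") and current:
                _, count = line[3:].split(",")[:2]
                hits[current][0] += 1
                if int(count) > 0:
                    hits[current][1] += 1
            elif line == "end_of_record":
                current = None
    return hits

if __name__ == "__main__":
    threshold = 80.0   # assumed review threshold, not a universal rule
    for src, (total, covered) in sorted(parse_lcov(sys.argv[1]).items()):
        pct = 100.0 * covered / total if total else 100.0
        flag = "  <-- review: important logic may be untested" if pct < threshold else ""
        print(f"{pct:5.1f}%  {src}{flag}")
```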
Finally, as a test leader you should be able to take a calculated risk and declare success. Taking that risk is a process that involves asking, and preparing answers to, a lot of questions like the ones below (a toy risk-scoring sketch follows the list). Once you have the answers, feed them back into the QA process and you are far more likely to deliver a better product.
  • Which functionality is most important to the product's intended purpose?
  • Which functionality is most visible to the customer/user?
  • Which functionality has the largest financial impact on users?
  • Which aspects of the application are most important to the customer?
  • Which aspects of the application can be tested early in the development cycle?
  • Which parts of the code are most complex, and thus most subject to errors?
  • Which parts of the application were developed in rush or panic mode?
  • Which aspects of similar/related previous projects caused problems?
  • Which aspects of similar/related previous projects had large maintenance expenses?
  • Which parts of the requirements and design are unclear or poorly thought out?
  • What do the developers think are the highest-risk aspects of the application?
  • What kinds of problems would cause the worst publicity?
  • What kinds of problems would cause the most customer service complaints?
  • What kinds of tests could easily cover multiple functionality?
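For example, the answers can be turned into a simple risk score per feature, so that test effort goes to the riskiest areas first. The features, dimensions and weights in this sketch are entirely hypothetical; the point is only the mechanism of ranking by weighted answers.

```python
WEIGHTS = {                # assumed relative importance of each group of questions
    "business_impact": 3,  # intended purpose / financial impact
    "visibility": 2,       # customer-facing functionality
    "complexity": 2,       # complex or rushed code
    "history": 3,          # past defects / maintenance pain
}

features = {               # 1 (low risk) .. 5 (high risk) per dimension, per feature
    "checkout":      {"business_impact": 5, "visibility": 5, "complexity": 4, "history": 3},
    "search":        {"business_impact": 3, "visibility": 4, "complexity": 3, "history": 2},
    "admin_reports": {"business_impact": 2, "visibility": 1, "complexity": 2, "history": 1},
}

def risk_score(scores):
    """Weighted sum of the per-dimension answers for one feature."""
    return sum(WEIGHTS[dim] * value for dim, value in scores.items())

# Rank features by risk so the highest-risk areas get tested first and deepest.
for name, scores in sorted(features.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{risk_score(scores):3d}  {name}")
```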
Hope you will share your thoughts on this...
