Continuing the series of questions from the CONQUEST 2010 conference two weeks back, we’ll take a closer look at questions regarding standards and methods.
Standards & Methods
Question:
It would be great if the ordering customer could formulate precise acceptance criteria during the offer/analysis phase, and the development company could later assume that the software will be accepted once those criteria are fulfilled. Defining such criteria should be possible for a customer who has only limited time and perhaps no education in computer science.
What possibilities exist to get practical and, above all, testable acceptance criteria?
Answer:
Specification by Example is the most common approach; Acceptance Test-Driven Development (ATDD) is another name for it, though many people are currently unhappy with that name. The idea is the following: sit together with your customer several times during the development of the software, and discuss what the software shall achieve based upon meaningful examples. Over the course of the project, automate these examples as literally as possible, and show your customer not only the results after each iteration, but also let them play with the living software. Over time you build a software system with testable (by definition) acceptance criteria as a by-product.
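To make this concrete, here is a minimal sketch of what such an automated customer example might look like. It assumes Python with pytest and a hypothetical free-shipping rule; teams often use tools like FitNesse or Cucumber for this, but the principle is the same.

```python
# A minimal Specification by Example sketch (hypothetical shipping-cost rule).
# Each row in CUSTOMER_EXAMPLES is an example agreed upon with the customer,
# written in the customer's own terms and automated almost literally.

import pytest


def shipping_cost(order_total: float) -> float:
    """Hypothetical domain rule: orders of 50 EUR or more ship for free,
    everything below costs a flat 4.95 EUR."""
    return 0.0 if order_total >= 50.0 else 4.95


# "Given an order of <order_total>, the customer pays <expected> shipping."
CUSTOMER_EXAMPLES = [
    (49.99, 4.95),   # just below the free-shipping threshold
    (50.00, 0.00),   # exactly at the threshold
    (120.00, 0.00),  # well above the threshold
]


@pytest.mark.parametrize("order_total, expected", CUSTOMER_EXAMPLES)
def test_shipping_cost_matches_customer_example(order_total, expected):
    assert shipping_cost(order_total) == expected
```

When a new example surfaces in a conversation with the customer, it becomes another row in the table rather than a new hand-written test, so the examples stay readable for people with limited time and no computer science background.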
We’re currently working on a pattern format to write down our thoughts around this, so there is likely more to come during the next months. In the meantime, Gojko Adzic’s Bridging the Communication Gap has more on this. Weinberg & Gause’s Exploring Requirements: Quality Before Design and Gottesdiener’s Requirements by Collaboration should also serve you very well until Gojko’s third book becomes available, filled with stories from teams who successfully applied all of this.
Question:
Testing has the goal of finding errors. Acceptance tests do not have the goal of finding errors, but of showing that the executed test cases work. Is the name “acceptance test” therefore misleading? Specifically: should test cases for acceptance testing be error-intensive?
Testing should build trust in the system under test. Therefore error-intensive tests are necessary. If the test cases for the UAT are not supposed to be error-intensive, since they do not have the goal of finding errors, can the UAT then build trust in the system?
Answer:
Testing is not a placeholder for missing trust. If your client does not trust your software development capabilities, then there is no way that testing can help you with that. Instead you should try to find out the underlying reasons why they don’t trust you. Did you build software that didn’t work? Did you work closely with your customer in order to understand what they are asking you to build? Did you get early feedback on your progress based upon “working software” rather than screen mockups or, worse, faked demonstrations of the system? For more on this, read Weinberg’s Perfect Software… and other illusions about testing.
Additionally, I want to add an idea I got from Michael Bolton. One thing we do know when acceptance tests fail is that we are not ready to ship the software. Maybe we should therefore rename them rejection tests, to clear up any misunderstanding surrounding this.
Question:
A while ago a tester from a renowned testing company tried to tell me that test specification methods are interesting in theory but practically useless. During test case identification for large software projects, equivalence classes, boundary values, and decision tables do not help. Test cases are often created hand to mouth, without system or method, often time-boxed and totally dependent on the individual tester. What do you think is the cause of this?
Answer:
You named it already: “large software projects”. The problem with this is that the size of the project also hampers the understanding of the software. For example, if you have a software project planned to last two years, then you know basically nothing about the final product during more traditional test case identification. A timeframe of two years is just too far away to create test cases that will be meaningful for the software at hand. Once you slice this down into smaller iterations, you can grasp the scope of the two or four weeks ahead and plan properly for these shorter time-boxes. In addition, I have not seen any project lasting that long (two years) in which the understanding of the initial requirements did not change over the course of the project. Therefore you will want to change your initially created tests quickly as your understanding of the situation changes.
This does not mean that you cannot work with these methods at all. Good exploratory testers will set up sessions to address product risks. They use all the methods described, but in a very concise manner. Test heuristics as well as equivalence classes also help the tester on the shorter time-frame, so you do not give up these techniques; rather, your testers set themselves up to tackle the product based on the most current understanding. In addition, if you use test automation, working in small iterations and getting a common understanding of the shorter stories in an iteration help to focus automation on meaningful test cases.
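As an illustration of how equivalence classes and boundary values stay useful even within a short iteration, here is a small sketch, again assuming Python with pytest and a hypothetical age-based ticket-pricing rule: the classes and their boundaries are written down once and feed a handful of automated checks.

```python
# A boundary-value / equivalence-class sketch for a hypothetical pricing rule:
# under 18 and 65 or older get a reduced ticket, everyone else pays full price.

import pytest


def ticket_price(age: int) -> float:
    """Hypothetical rule: reduced price (5.00) for age < 18 or age >= 65,
    full price (9.00) otherwise. Negative ages are rejected."""
    if age < 0:
        raise ValueError("age must not be negative")
    return 5.00 if age < 18 or age >= 65 else 9.00


# One representative per equivalence class plus the values at each boundary.
CASES = [
    (0, 5.00),    # lower boundary of the "minor" class
    (17, 5.00),   # upper boundary of the "minor" class
    (18, 9.00),   # lower boundary of the "adult" class
    (64, 9.00),   # upper boundary of the "adult" class
    (65, 5.00),   # lower boundary of the "senior" class
    (99, 5.00),   # representative of the "senior" class
]


@pytest.mark.parametrize("age, expected", CASES)
def test_ticket_price_per_equivalence_class(age, expected):
    assert ticket_price(age) == expected


def test_negative_age_is_rejected():
    with pytest.raises(ValueError):
        ticket_price(-1)
```

The table of cases is small enough to review in a single session, and it changes together with the rule when the understanding of the requirement changes.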
Question:
Are there metrics or mathematical methods to determine test effort (other than a percentage of the overall software development effort, etc.)?
Answer:
There are metrics in the CMMI programs, but I don’t believe they help much. First, collecting the data for these metrics takes time, and since it is most often testers who collect the data, this is surely time not spent testing. Second, once you start to measure something, even if just for the sake of collecting data to gain an understanding of test effort, testers will adapt to the system they are measured by, thereby distorting your data and the underlying system. Third, a percentage of the overall effort is meaningless to the extent that some development activities take longer to test while others take longer to program. On a statistical average these differences may not count for much, but it is rather hard to give a context-free answer to that question.
Here are just a few of the factors that influence how much effort your tests will take. See Kaner and Bond’s Software Engineering Metrics: What Do They Measure and How Do We Know? for a more elaborate view. Your test effort depends upon:
- the ability of your test team
- the ability of your programming team
- the amount of information you want to have from your tests
- the amount of illusions you want to keep about your software
- the criticality of the project you’re working on (think space projects or nuclear reactors vs. Twitter)
- the amount of money and patience you want to spend on testing
- the degree of teamwork between your testers and programmers
- the motivational level of your team (e.g. decreased by screaming or blaming managers)