Henrik Andersson spoke about Exploratory Testing Champions at the EuroSTAR conference in Copenhagen. He described how he introduced Exploratory Testing in a large company.
Henrik Andersson asked the audience who was working in a large company, and noted that larger companies are more resistant to changes in their process. The company where he introduced Exploratory Testing worked in the telecommunications industry and had many different testing areas. They never had the time to change and improve their process. Testing was organized around requirements: they measured requirements coverage and based their go-live decisions on that metric. The company had test automation in place and was working heavily on automating tests and maintaining the test cases.
Andersson said that they brought in James Bach and Michael Bolton to provide a new way of testing and a new passion for it. Andersson was called in to take a closer look, introduce ISTQB certification, and find out what the actual problem was. He stated that there were lots of areas for which they knew they had no test coverage. The testers knew which areas needed new tests, but they were not allowed to work on them: building a test script took a lot of time, and it could take days to get a test working. Motivated by this knowledge that the testers had but were not using, Andersson talked to his stakeholders. But selling Exploratory Testing by showing the client a process and asking everyone to follow it surely wouldn't work either.
Andersson defined Exploratory Testing by stating the definition from James Bach:
Exploratory testing is simultaneous learning, test design, and test execution.
Andersson stated that he introduced Exploratory Testing by letting the testers test the waters and try out the new approach. He found that the team was excited about it. Testers found new things and were greatly valued. They became engaged in testing, and quite passionate about it.
Andersson started with one team and held a half-hour briefing in the morning, asking the testers to come up with missions for the tests they were going to run. He then used 90 minutes of focused testing time, during which he did not guide the testers. When they asked what to do without guidance, Andersson asked them to come up with new test ideas on their own. In the middle of the day they held a debriefing on how they felt and which problems they had found. They also discussed the obstacles they ran into.
Andersson asked every tester to write down how they felt about the day, as opposed to automating test cases. Among the feedback he got from this Exploratory Testing pilot was:
- The scheduling is good; otherwise it's easy to stop after, say, 30 or 60 minutes.
- It’s a little bit hard to find good missions.
- The testers really liked this way of working. It felt really productive to actually perform tests rather than write programmed test cases.
Andersson showed statistics from the first pilot project. The interesting thing he found was that on the very first day of trying out Exploratory Testing and session-based test management, they found problems in their product: important defects that could have threatened the value of the work for the customer. They thereby found very valuable information about the software they were building.
Andersson brought these results back to the steering group and asked them to decide where to go next: either continue to focus on ISTQB certification, or go for the Exploratory Testing approach and invest in the skills and competences of the testers actually doing the work.
With the steering committee's decision still pending, and with only eight days left at the company, he introduced Exploratory Testing Champions as a new role. Instead of giving the testers all the answers, he introduced the role and let them find out how to test on their own.
Andersson came up with a schedule mixing workshops and actual testing. The workshops covered topics such as testing heuristics, the test environment, and designing the test process. Andersson asked more experienced testers to pair-test with other testers at the company. This way he made sure the testers made the testing process their own and actually followed it.
The output from the workshops was hands-on experience with Exploratory Testing, a process, checklists for Exploratory Testing sessions, and lots of input for further testing. Tools were improved to support the new process, and they got real examples of ET session reports. This led to a session report template that was later used when debriefing individual testers.
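The write-up doesn't show the template itself, so here is a minimal sketch of the fields such a report typically captures; the field names below are assumptions based on common session-based test management practice, not Andersson's actual template:

```python
from dataclasses import dataclass, field

# Hypothetical structure of an ET session report; field names follow
# common session-based test management practice, not the actual
# template from Andersson's project.
@dataclass
class SessionReport:
    charter: str              # the mission the session set out to explore
    tester: str
    duration_minutes: int     # e.g. the 90-minute timebox from the pilot
    areas_covered: list = field(default_factory=list)
    bugs_found: list = field(default_factory=list)
    issues: list = field(default_factory=list)   # obstacles, open questions
    notes: str = ""           # raw test notes discussed at the debriefing
```

A debriefing can then walk through a report field by field: what the charter was, what was actually covered, and which bugs and issues came up.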
The role of the Exploratory Testing Champions was to spend one day per week within the projects. During that time they learned new things about testing, but also about the company's products. The Champions had the vision to drive, develop, build, and explore the craftsmanship of testing.
Andersson explained the process they followed to introduce the Exploratory Testing Champions. They introduced test sessions, two every second week, and held debriefings after the sessions in order to reflect on the course of the testing activities.
Andersson stated that they are now performing rather well, though they could still do better. He explained that he gave them a good start with a process from which further Exploratory Testing practices could flourish. They still feel they don't have the time to do the testing, but they know that a good mixture of automation and exploration is the key to testing success. One of the things he regrets is that management did not encourage Exploratory Testing after he left. Without the interest of the stakeholders of the testing process, good practices will diminish. New approaches do not stick by themselves. They need interest from upper management.
There’s an alternative: management could ignore the practices and focus on the results. One of the most damaging things I see in my travels is a tendency for middle managers who are quite ignorant about testing to micromanage it. They require completion (often “successful” completion) of test cases; they count test cases; they demand silly things like one (or two) test cases per requirement. If you want to torpedo the quality of any kind of work, make sure that it’s being managed by someone who doesn’t understand it. Instead, if managers were to require useful information (and in particular, information about problems), rather than test cases and green bars, the value and relevance of the information would tend to rise.
—Michael B.
Markus,
It sounds like a fantastic talk. Thank you for blogging about it and other topics while you’re there. I heard via Twitter that Henrik quoted a statistic showing ET approaches to be 7 times more effective than their “business as usual” methods. I wasn’t there, but could you confirm whether he said something similar to this and, if so, how he measured it?
A note on where I’m coming from: I’m familiar with the arguments against over-reliance on metrics and agree that once a metric becomes a goal, it can become both misleading and potentially damaging. Having said that, if Henrik measured defects found per tester hour in a business-as-usual environment, compared that to defects found per tester hour in an ET environment, and found 7X more defects per tester hour using ET approaches, that would be an extremely relevant data point for Henrik to share with the testing community at large. It’s only one data point, but if a few dozen other projects measured the same kinds of data, I strongly suspect that the collective data points would show significantly greater efficiency and effectiveness when ET approaches are used. I, for one, would be delighted if more testing teams compared two different approaches to testing and shared their results with the world.
This is a topic I’ve been interested in for 4-5 years. I’m surprised how few publicly available data points about such things exist within the software testing community. I’ve personally worked at gathering as many data points as possible comparing combinatorial test design methods to manual test case selection. By “combinatorial test design methods” I mean, for example, using pairwise methods to generate test cases, designing test cases with orthogonal arrays, or using more sophisticated combinatorial approaches. FWIW, the data from dozens of such projects I’ve been involved with show a dramatic and consistent improvement in defects found per tester hour when combinatorial test design methods are used. On average, more than twice as many defects per tester hour are found, and more defects are consistently found overall. There was no bias for or against “important” defects being uncovered with combinatorial test design. This >2X improvement we’ve seen on average is well short of the 7X improvement potentially seen by Henrik, but it still represents a dramatic improvement over the business-as-usual “design all your tests first by hand, then execute them later” approaches that are, unfortunately, all too common in our industry.
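To make the pairwise idea concrete, here is a minimal greedy all-pairs sketch; the configuration space and parameter names are made up for illustration, and real projects would typically use a dedicated pairwise tool rather than this toy implementation:

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy all-pairs selection: repeatedly pick the candidate test
    case that covers the most not-yet-covered value pairs, until every
    pair of parameter values appears in at least one chosen test."""
    names = list(parameters)
    # All value pairs to cover, keyed by parameter index so that equal
    # values of different parameters don't collide.
    uncovered = {
        ((i, a), (j, b))
        for i, j in combinations(range(len(names)), 2)
        for a in parameters[names[i]]
        for b in parameters[names[j]]
    }
    # Candidate pool: the full Cartesian product of all values.
    candidates = [dict(zip(names, values))
                  for values in product(*parameters.values())]

    def covers(test):
        items = [(i, test[name]) for i, name in enumerate(names)]
        return set(combinations(items, 2)) & uncovered

    tests = []
    while uncovered:
        best = max(candidates, key=lambda t: len(covers(t)))
        uncovered -= covers(best)
        tests.append(best)
    return tests

# Hypothetical configuration space: 3 * 2 * 3 = 18 exhaustive
# combinations, but all pairs are covered by far fewer tests.
config = {
    "browser": ["Firefox", "Chrome", "IE"],
    "os": ["Windows", "Linux"],
    "locale": ["en", "sv", "de"],
}
for test in pairwise_tests(config):
    print(test)
```

For this toy space the greedy pass yields around nine tests instead of eighteen, which illustrates how the same pair coverage can be reached with far fewer executed tests.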
More people should experiment, gather data, and share the data and lessons learned with the software testing community. It sounds like Henrik did an outstanding job at sharing his experiences. Thanks again for writing about it.
– Justin Hunter