Wednesday’s morning keynote at the Agile Testing Days came from Michael Bolton (not the singer, and not the guy from Office Space). Michael argued that testers should get out of the quality assurance business.
Michael Bolton opened his keynote by describing himself as an Agile skeptic. He shared what Agile means to him: the Agile Manifesto, as well as the definition from the Oxford Dictionary of English, “able to move quickly and easily”. He recalled the early days of Agile, when some claimed that testers were no longer necessary. The foremost reason to have testers, he explained, is to tackle the risks we take on by fooling ourselves.
Bolton asked the audience for their definition of quality, and who actually has “Quality Assurance” in their job title. He offered his own definition: quality is “value to some person(s)”, with the addition from James Bach and himself, “who matter”. He referred to Linda Rising’s keynote, in which she argued that decisions about quality are always emotional, not rational.
As testers, do we have the authority to design the product, to write the code, to hire programmers, or to research the market? If not, how can we assure quality? The answer is that we can’t; as testers, our job is to test. Bolton noted that defining a computer program as a set of instructions for a computer is like defining a house as a set of house-building materials arranged by house-building patterns. He cited Cem Kaner’s definition instead: a computer program is a communication among several people and computers, separated over distance and time. A program is far more than its code, or the instructions for the device. By this definition, quality is far more than the absence of errors, and testing is far more than writing code that asserts.
Testing did not originally come from computer science. People are meaningful in the context of testing, so testing becomes something like anthropology. Testing is “questioning a product in order to evaluate it”, or “gathering information with the intention of informing a decision”. Neither of these definitions of testing has the word “assurance” built into it.
He pointed out the difference between testing and checking. Checking is foremost confirmatory, and its decision rule can be affirmed non-sapiently. The problem is that we’re currently asking people to do confirmatory work and calling it testing; by doing so we tell people to stop thinking. A sapient activity is one that requires a thinking human to perform, and this kind of testing cannot be scripted. We do not test only for repeatability. A good tester does not just ask “Pass or fail?”, but instead “Is there a problem here?” Automation cannot program a script, investigate a problem that we’ve found, or determine the meaning or significance of a problem. Automation cannot decide whether there’s a problem with a script; we need sapient humans for that. Automation can help us do these things, but it doesn’t do them for us.
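To make the distinction concrete, here is a minimal sketch (my own illustration; the function and names are hypothetical, not from Bolton’s talk) of a check: a decision rule that a machine can affirm without any thinking involved.

```python
# A check: a binary decision rule that can be affirmed non-sapiently.
# The function under "test" and the expected value are hypothetical examples.

def add(a, b):
    return a + b

def check_addition():
    # The machine can confirm this expected outcome mechanically...
    assert add(2, 3) == 5
    # ...but it cannot ask "is there a problem here?" For example,
    # add("2", "3") returns "23" -- whether that is a bug or a feature
    # is a judgment only a thinking human can make.

check_addition()
print("check passed")
```

The assertion passes or fails; noticing that string concatenation might surprise a user is testing, and no assertion performs it for us.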
Bolton claimed that acceptance tests are examples, and examples are not tests. When an acceptance test passes, it means that the product appears to meet some requirement, to some degree, in some circumstance, at least once, on my machine, this time. One problem with acceptance tests is that they’re set at the beginning of an iteration or development cycle, when we know less about the product than we eventually will once it’s built. Acceptance tests are examples: they do not (and cannot) cover everything that might be important. Acceptance tests are checks, not tests. They don’t tell us when we’re done; instead, we know we’re not finished if they fail to pass. So perhaps we should call them “rejection checks”.
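As a hypothetical illustration (the discount rule and names below are invented, not from the talk) of why a passing acceptance check only covers “some circumstance, at least once”:

```python
# A hypothetical acceptance check for an invented requirement:
# "orders over 100 get 10% off". Passing it shows the product appears
# to meet the requirement in one circumstance -- not in all of them.

def discounted_total(amount):
    return amount * 0.9 if amount > 100 else amount

def acceptance_check():
    assert discounted_total(200) == 180  # one example, one circumstance

acceptance_check()
print("acceptance check passed")
```

Questions the check cannot raise on its own: what happens at exactly 100? With negative amounts? With floating-point rounding on odd totals? Failing the check tells us we’re not done; passing it tells us very little.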
Bolton went on to ask how we know when we’re done. We’re done with testing when there are no more questions that need answering. We’re done developing when the product owner decides that there’s no more valuable work to do. In healthy environments these decisions evolve naturally; in unhealthy environments they are imposed artificially.
Bolton went on to question the notion of regression tests. In Agile shops there is a presumption that regression tests are necessary. He showed two definitions of a regression test: first, any repeated test; second, any test intended to show that quality hasn’t worsened. But a repeated test might not show that quality has worsened, even when it has, and a test that shows quality has worsened might be a brand-new test. He presented the Repeat-Them-In-Full problem: automated regression tests make execution fast and cheap, but a test declines in value as its capacity to reveal new information diminishes. High-level checks may not be risk-focused, and they may be unnecessary when there are plenty of low-level checks. Bolton also challenged whether regression is actually our biggest risk. In a survey from before the Agile Manifesto, test managers reported that only 6–15% of discovered problems were regression problems. So is regression really a risk serious enough to deserve that much automation? When a building catches fire for the second time, what do you report? Maybe we should instead say “we seem to have a regression-friendly environment here”, and solve that underlying problem.
Bolton defined testing to be about exploration and learning, and summarized what exploratory testing is: “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of his or her work by treating test design, test execution, test result interpretation, and test-related learning as mutually supportive activities that run in parallel throughout the project.” He noted that this idea is well aligned with the values of the Agile Manifesto.
Bolton described testers as skilled investigators. We are sensory instruments: we hear, see, and smell things about the program, get a taste of it, and provide this information to the project stakeholders. Software development is not much like manufacturing. In manufacturing the goal is to make a zillion copies of something, all the same; in software, making identical copies is trivial. Instead we have a changing environment to deal with. Software development is much more like design, and designs cannot be checked; they need to be tested. Testing a design is much more like CSI: investigating the state of the program, using lots of tools that aid us in our quest.
If we view testing as a service, we can solve many problems. Bolton brought up the question “When are we done testing?”, comparing it to a restaurant situation where you ask the waiter “When are we done eating?” Testing as a service also helps with requirements, in that we can help our customers get rid of ambiguity. In this way we are being helpful.
Bolton referred to other fields that resemble testing: investigative reporters and journalists, anthropologists, historians, botanists, philosophers, and film critics. What can we learn about the past? What do people in the real world actually do? What’s actually going on? What’s the story? Why does this thrive over here, but not over there? What do we know, and how do we know we know it? Will this movie appeal to its intended audience? We have to look at the social side of testing, and be helpful: helping people consider possibilities, and through that, helping them make decisions.
But can’t testers help with quality tasks? Bolton answered his own question: sure, we can help in this regard. With development teams being autonomous and self-organizing, this is natural. What we have to keep in mind, though, is that we’re not testing while we do so. We may help fix a bug, we may help our colleagues and be appreciated for it; this is neither busywork nor busybody work. But we still have to keep testing in mind while doing so.
On the skills and knowledge we need as testers, he listed critical thinking, general systems thinking, design of experiments, visualization and data presentation, observation, reporting, rapid learning, and programming as well. Let’s learn about measurement, anthropology, teaching, risk analysis, cognitive psychology, economics, and epistemology. If we have a pass/fail ratio, we have a ratio of hopes over rumors. Bolton mentioned test framing: connecting the mission of testing with what we execute, binding together the heuristics, the models, the types of tests, and the design and execution of those tests. Exploratory testing requires skill, but doesn’t any testing require skill? He used the metaphor of a plane whose passengers say, “Well, we wanted to go with a skilled pilot, but they’re just so darned expensive.”
We shouldn’t think in terms of automated vs. manual testing, or developers vs. testers. We are all developers on a team, since we all develop the product. Instead of “automated testers” vs. “manual testers”, we should think of a “toolsmith” specialty. Testers don’t assure quality. We should stop discussing the role of testers on Agile teams and instead focus on the testing skills we need on our Agile team. Thereby we can stop fooling ourselves.
We are not the enforcer or the judge; we are there to add value, not to collect taxes. We’re here to be a service to the project, not an obstacle. We’re investigators.