At CAST 2011, Matt Heusser spoke about the context of test automation and the economics around it.
Matt started with a calculation from CAST 2010. Someone there argued that if you automate a test, you can easily run it ten thousand times, and that running it in a continuous integration system 50 times per day therefore saves lots of money. Matt explained that he doesn't believe such simple calculations.
Matt stated that he worked with SocialText for three years, where they built a leading Selenium-based test automation framework. He presented four variables that can help us model test automation: costs, features, defects, and release timing. These four factors give him a basis for discussing the economics of automation.
Matt explained, using his model, that in the beginning test automation on most projects is going to slow us down: costs go up and features are dropped. The medium-term argument is that we will find more defects with the tools. In general there are two approaches to test automation. One is to describe exactly how the software is going to look; then every time something changes in the interface, you have to adapt the test. Alternatively, you can use keywords to abstract away from the interface, but then the tester has to spend more time maintaining those keywords.
Matt showed a video of an awareness test. He argued that any tool comes with inattentional blindness, especially in software testing and test automation. Referring to Selenium, he showed a video of a bug he had found in the past week. While deleting an account in a system, an error appeared that his blink heuristic could reveal, but that a Selenium-based wait for this-or-that text to appear did not find. Something went wrong in the application, but the message was immediately overwritten by a JavaScript result. Matt explained that what we refer to as test automation is often just computer-assisted test evaluation.
Matt suggested going back to our organizations and applying the model he provided, based on costs, features, defects, and release timing. To Matt, test automation based upon domain-specific languages makes sense, but we should keep in mind that test automation is a means, not an end. He suggested that automation should drive the application to a starting point at which meaningful manual testing can begin.
Matt concluded his talk with this advice: when you find bugs in your iteration, reflect upon them and seek different strategies to tackle that risk.