After a discussion on the Example-driven School of Testing with Michael Bolton, I realised that I had missed some points. Being human, I truly believe that I am allowed to miss a point once in a while; the critical thing is to realise it for yourself.
The first point Michael mentioned was the idea that whenever the acceptance tests pass, then we – the project team – are done. Indeed, acceptance tests tell the project team that they're not done when they don't pass, but that is something different – check the Wason selection task for an explanation. (Just now I realise that I had given false kudos to James Bach for this quote. Sorry, Michael.) My reply was to view acceptance tests as a goal in terms of S.M.A.R.T.: Specific, Measurable, Attainable, Relevant and Time-bound. You can measure when you might have something to deliver, i.e. when the agreed examples from your specification workshops pass. You can measure when you might be ready to bring the software to the customer; the goal is specific and should be – of course – business-relevant. A friend of Michael Bolton put it this way:
When the acceptance tests pass, then you’re ready to give it to a real tester to kick the snot out of it.
This led my thoughts to a post of mine from February this year: Testing Symbiosis. The main motivation behind that post was a discussion on the Software Testing mailing list about Agile and Context-driven testing. The plot outline of the story was Cem Kaner's reply on that list to an article by Lisa Crispin and Janet Gregory, which led to Janet leaving the list. Enough history.
The outline of Testing Symbiosis is that Exploratory Testing and Test Automation complement each other and rely on each other. Automated testing alone can lead to problems in your user interface or in the user experience of your product, while purely exploratory testing may let serious regression bugs through. A wise combination of the two can lead to a well-tested software product whose quality is perceived as high when it ships to your customer. Here is Michael Bolton's summary after reading Testing Symbiosis:
That said, I think that the role of the tester as an automator of high-level regression/acceptance tests has been oversold. I’m hearing more and more people agree with that. I think your approach is most sane: “On the other hand you can more easily automate software testing, if the software is built for automation support. This means low coupling of classes, high cohesion, easy to realize dependency injections and an entry point behind the GUI. Of course by then you will have to test the GUI manually, but you won’t need to exercise every test through the slow GUI with the complete database as backend. There are ways to speed yourself up.”
1) Automation support.
2) Entry points behind the GUI.
3) Human interaction with the GUI, more automation below it.
These points seemed to need some clarification; a minimal sketch of what I mean follows below. Feel free to remind me of any corners I have still left out.
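To make the three points a bit more concrete, here is a minimal sketch (hypothetical names, not production code) assuming a constructor-injected repository interface: the business rules sit behind an entry point that the GUI would also call, so an automated check can exercise them with an in-memory fake instead of the slow GUI and the full database, while the GUI itself stays a thin layer that a human can still explore.

```java
import java.util.ArrayList;
import java.util.List;

// Low coupling: the business logic only sees this interface, not the database.
interface OrderRepository {
    void save(String order);
    List<String> findAll();
}

// High cohesion: the order rules live here, behind an entry point below the GUI.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) { // dependency injection via the constructor
        this.repository = repository;
    }

    void placeOrder(String order) {
        if (order == null || order.isEmpty()) {
            throw new IllegalArgumentException("empty order");
        }
        repository.save(order);
    }

    int orderCount() {
        return repository.findAll().size();
    }
}

// An automated check injects this in-memory fake; production wiring would inject
// a database-backed implementation instead.
class InMemoryOrderRepository implements OrderRepository {
    private final List<String> orders = new ArrayList<>();
    public void save(String order) { orders.add(order); }
    public List<String> findAll() { return orders; }
}

public class BelowTheGuiCheck {
    public static void main(String[] args) {
        // Exercise the same entry point the GUI uses, just below it and without the GUI.
        OrderService service = new OrderService(new InMemoryOrderRepository());
        service.placeOrder("one coffee");
        if (service.orderCount() != 1) {
            throw new AssertionError("expected exactly one stored order");
        }
        System.out.println("below-the-GUI check passed");
    }
}
```

The names (OrderService, OrderRepository, the in-memory fake) are made up for illustration; the point is only the shape: low coupling through an interface, the dependency injected through the constructor, and a check that runs entirely below the GUI, leaving the GUI itself to human interaction.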