Today, we’re going to continue our ParkCalc automation excursion. We will take a closer look at the second test in the provided test examples, the keyword-driven format, and see how we can improve it. Please note that I added an update to the previous blog entry showing that we can improve the test even more by extracting the date ranges into meaningfully named variables – just as Dale Emery did in his article Writing Maintainable Automated Acceptance Tests.
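To make that idea concrete, here is a minimal sketch in Robot Framework syntax of what such a refactoring could look like. The resource file, keyword names, variable names and the expected amount are assumptions of mine for illustration, not the actual content of the session examples; the point is merely that the raw date literals move into variables whose names state their intent.

```
*** Settings ***
# Hypothetical resource file that would implement the low-level ParkCalc keywords
Resource    parkcalc_keywords.txt

*** Variables ***
# Before the refactoring these literals sat directly inside the test steps
${ONE_DAY_VALET_STAY_ENTRY}    05/04/2010 12:00 AM
${ONE_DAY_VALET_STAY_EXIT}     05/05/2010 12:00 AM

*** Test Cases ***
Valet Parking For One Day
    Select Parking Lot           Valet Parking
    Enter Entry Date And Time    ${ONE_DAY_VALET_STAY_ENTRY}
    Enter Exit Date And Time     ${ONE_DAY_VALET_STAY_EXIT}
    Calculated Cost Should Be    $ 18.00    # expected amount for illustration only
```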
Continue reading ParkCalc automation – Refactoring a keyword-driven test
ParkCalc automation – Refactoring a data-driven test
Over the weekend I introduced ParkCalc automation. Today, we will take a closer look at the third test in the provided test examples, and see how we can improve it. Before I do this, I will point you to two great articles from Dale Emery. The first is a tenish-page piece where he walks through a login screen; Uncle Bob showed the same example using FitNesse with Slim. In the second he describes a layered approach to software test automation very well. Together with Gojko’s anatomy of a good acceptance test, this gives us a picture of where we should be heading.
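To illustrate the data-driven format itself, here is a minimal sketch using Robot Framework’s test template mechanism, which keeps the test data in tabular form while pushing the interaction details down into keywords, roughly in the spirit of the layered approach Dale describes. The resource file, keyword names and expected amounts are placeholders I made up, not the ones from the actual examples.

```
*** Settings ***
# Hypothetical resource file with the low-level ParkCalc keywords
Resource         parkcalc_keywords.txt
Test Template    Parking Cost Should Be

*** Test Cases ***
Valet Parking For One Day            Valet Parking         05/04/2010 12:00 AM    05/05/2010 12:00 AM    $ 18.00
Short-Term Parking For Five Hours    Short-Term Parking    05/04/2010 12:00 AM    05/04/2010 05:00 AM    $ 10.00

*** Keywords ***
Parking Cost Should Be
    [Arguments]    ${lot}    ${entry}    ${exit}    ${expected}
    # Wires one data row to the (assumed) low-level keywords; the amounts are illustrative only
    Select Parking Lot           ${lot}
    Enter Entry Date And Time    ${entry}
    Enter Exit Date And Time     ${exit}
    Calculated Cost Should Be    ${expected}
```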
Continue reading ParkCalc automation – Refactoring a data-driven test
ParkCalc automation – Getting started
This week Gojko Adzic wrote about the anatomy of a good acceptance test. After having read his elaboration, I remembered how I came up with the preparation for the EWT19 session some weeks ago. We used RobotFramework to automate tests for the Parking Lot Calculator, which we had asked Weekend Testing participants to test with manual Exploratory Testing a few weeks earlier. To get testers started we provided them with three examples that I had prepared before the session. We then asked testers to automate their tests for the ParkCalc website based on one of the examples we provided. Here is my write-up of how I came up with the examples, and what I had in mind.
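To give a rough idea of what the most basic of those prepared examples might have looked like, here is a minimal script-style sketch using Robot Framework with SeleniumLibrary. The URL, the element locators and the expected amount are placeholders of my own; the real examples from the session will differ.

```
*** Settings ***
Library    SeleniumLibrary

*** Variables ***
# Placeholder URL; point this at wherever a copy of ParkCalc is hosted
${PARKCALC_URL}    http://example.com/parkcalc/
${BROWSER}         firefox

*** Test Cases ***
Valet Parking For One Day
    Open Browser    ${PARKCALC_URL}    ${BROWSER}
    # The locators below are assumptions; inspect the real page for the actual element names
    Select From List By Label    ParkingLot      Valet Parking
    Input Text                   StartingDate    05/04/2010
    Input Text                   LeavingDate     05/05/2010
    Click Button                 Calculate
    Page Should Contain          $ 18.00    # expected amount for illustration only
    [Teardown]    Close Browser
```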
Big Test Design Up-Front
In yesterday’s European Weekend Testing session a discussion came up on whether or not to follow the given mission. The mission we gave was to generate test scenarios for a particular application. During the debriefing just one group of testers had fulfilled this mission to the letter, generating a list of scenarios for testing later; the remaining two had deviated from the mission and tested the product, thereby providing meaningful feedback on the usefulness of the product itself. Jeroen Rosink has already put up a blog entry on the session. He mentions that he defined his own mission, and that it’s ok to do so during his spare time:
“As mentioned, I made my own mission also. I believe I am allowed to do it because it is my free time, and I still kept to the original mission: define scenarios. For me, the questions mentioned in the discussion were in a way the scenarios.”
Of course, Jeroen is right, and he provided very valuable feedback in his bug reports. But what would I do at work when faced with such a situation? Should I simply test the already available application? Or should I do as I was asked? Well, it depends, of course. It depends heavily on the context, on the application, on the project manager, on the developers, on your particular skill level, maybe even on the weather conditions (nah, not really). This blog entry discusses some of the aspects I did not want to go into during the chat yesterday.
Model building
One week earlier, Michael Bolton attended our session. He explained that his course of action started with building a model of the application under test. Then he started to exercise the product based on that model, thereby refining his mental model of the application.
Jeroen had picked this approach as well. He built a mental model of the application and went through the application with that model in order to refine it. Ajay Balamurugadas, on the other hand, had built his model and translated it completely into test scenarios.
To be clear here, both approaches are reasonable. Indeed, knowing when to use which is essential. The software engineering analogy teaches us that we have to make decisions based on trade-offs. The trade-off at play here is model building (thinking) as opposed to model refinement (doing). The more time I spend on thinking through the application in theory, the less time I have during a one-hour session of weekend testing to actually test the application. On the other hand, the more time I spend on testing (and the more bugs I find by doing so), the less time I can spare to refine the model I initially made. Knowing the right trade-off between the two is context-dependent. I summarized this trade-off in the following graph.
In software testing there are more of these trade-offs. You can find most of them on page four of the Exploratory Testing Dynamics, where the exploratory testing polarities are listed. Thanks to Michael Bolton for pointing me to this.
Information
Basically, the main distinction between the two missions followed is that Ajay used the information available in his imagination to build the model for test scenarios, while Jeroen questioned the available product to get feedback on it. While we may not always have an application available to help us make informed decisions about our testing activities, the weekend testing session was constructed so that a product was in place which could be questioned.
Indeed, for software testing it is vital to make informed decisions. The design documents, the requirements documents, and the user documentation rarely satisfy our call for knowledge about the product. Interacting with the product can therefore reveal vital information about the product and its shape, and helps you gather information to refine your model of the application at hand. Of course, for very simple applications, or for programs in areas where you are an expert, this may be unnecessary. Again, the contextual information around the project provides the necessary bits of information about which path to follow.
Professionalism
So, professionally, Ajay and Jeroen both did a great job of testing. The key difference I would make at work is that I would inform my customer about the deviation from the mission. There might be legal issues, for example a requirement to follow a certain process for power plant testing, that demand following the mission to the letter. Negotiating the mission with the customer, as well as proposing a different mission when your professional sense calls for it, is essential for an outstanding software tester. Deviating from the original mission is fine with me, as long as you can deal with the Zeroth Law of Professionalism:
You should take responsibility for the outcome of every decision you make.
Since we’re more often than not the professionals when it comes to testing software, we need to inform our customer and our client about deviations from the original mission. We have the responsibility to explain our decisions and to make a clear statement that it’s unprofessional to deliver untested software, for example. Of course, they might overrule your decision, but by then they’re taking over the responsibility for the outcome themselves. And just because you lost one fight does not mean that you should give up raising your point and thereby lose the whole battle.
Last, but not least, I hope that the other participants, Vijay, Gunjan Sethi, and Shruti, do not feel offended that I only mentioned Jeroen and Ajay here. They also did a great job of testing the product, of course.