Continuing the What you always wanted to know about Testing and Quality Assurance series, we will take a closer look at Agile Test Management today. Please note that I consider the term Agile Test Management to be an oxymoron. In Agile the team is self-managing; there is no dedicated manager role, and instead the team is granted enough power to manage itself. This surely needs lots of trust – especially when transitioning from a more traditional environment – but trust is essential to any team effort.
Agile Test Management
Question:
Does “Agile Testing” always require “Test-driven development”?
Answer:
No, but it helps. That said, Kent Beck describes in the second edition of eXtreme Programming eXplained how the practices inside XP play together, and that you may need some practices before you can bring in others. For example, bringing in a Continuous Integration (CI) process does not make much sense if you are still struggling to automate your build process so that it can be triggered from the CI system. So, Agile does not require you or your team to work test-driven, be it using microtests or storytests. But it surely helps, since what you get from it are fully automated unit-level tests as well as fully automated system-level functional tests. Indeed, having worked on a system – FitNesse – which provided that amount of feedback through automated tests, I never want to work any other way. Of course, I have to from time to time (most of the time, unfortunately), but what I strive for is a fully automated system where I get immediate feedback when I break something unintentionally. Of course, this does not mean that I will consider the product finished just because all of those automated tests pass, but that's another topic.
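To make the terminology concrete, here is a minimal microtest sketch in JUnit 5. The ShoppingCart class and its behavior are invented for illustration; they are not taken from FitNesse or any project mentioned above:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// Hypothetical production class, written test-first.
class ShoppingCart {
    private final List<Integer> pricesInCents = new ArrayList<>();

    void add(String name, int priceInCents) {
        pricesInCents.add(priceInCents);
    }

    int totalInCents() {
        return pricesInCents.stream().mapToInt(Integer::intValue).sum();
    }
}

// Microtests: small, fast, unit-level tests that run on every build.
class ShoppingCartTest {

    @Test
    void totalOfEmptyCartIsZero() {
        assertEquals(0, new ShoppingCart().totalInCents());
    }

    @Test
    void totalSumsLineItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("apple", 50);   // price in cents
        cart.add("bread", 120);
        assertEquals(170, cart.totalInCents());
    }
}
```

Written test-first, such tests fail before the production code exists and pass afterwards – that is the immediate feedback loop described above.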
Question:
I am responsible for the tests in my Scrum team, but I can't finish my tests during the sprint. In my experience this is usually not possible, since code is developed until the very end of the sprint, or bugs are found at the end of the sprint.
When do tests take place? During the sprint, directly after a sprint, or starting with the sprint but concluding later (time-shifted)?
I would like to know what your experiences are and what the recommended approach is.
Answer:
First of all, as a ScrumMaster on such a team, I would worry about the team's definition of done, and whether the ProductOwner is satisfied with the outcome of the sprints. The job of the ScrumMaster is to make this misunderstanding transparent, and the ProductOwner is responsible for telling the team that he doesn't consider the stories done if they still have bugs. In addition, the tester on that team seems to be concerned about it, so it surely should be brought up at least during the retrospective. After all, this pattern holds for many Scrum teams: once empowered to commit to their work on their own, they overcommit and put more work into one sprint than they can handle. This is an issue that a good ScrumMaster course will surely address. Put more abstractly, I would consider a story to be done when it is implemented, tested and explored, as Elisabeth Hendrickson pointed out to me at last year's Agile Testing Days. Based on the description above, the stories do not seem to be tested. Period.
Second, testing takes place all the time on a Scrum team. Ideally we strive to start with a high-level specification, which we turn into executable examples over the course of the iteration. These executable examples are then constantly run as part of the continuous integration build. For new stories, examples are identified early on and then turned into automated tests alongside the implementation of those stories. There are no separate programming and testing phases as in more traditional approaches; the activities are interleaved. As described earlier, if there is a problem with getting stories to done because of too little testing, then every team member is encouraged to take on testing tasks to fill this gap and help the team reach its committed goal. This is what professional teams do, since they have a shared understanding of each other's activities.
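As a sketch of what turning examples into executable tests can look like – the story and the PasswordPolicy class are hypothetical – each example agreed on with the ProductOwner becomes one row of a parameterized JUnit 5 test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical production class for the story
// "As a user I want weak passwords to be rejected."
class PasswordPolicy {
    boolean accepts(String password) {
        return password.length() >= 8
            && password.chars().anyMatch(Character::isDigit);
    }
}

// Each row is one concrete example from the story discussion,
// turned directly into an executable test.
class PasswordPolicyExamplesTest {

    @ParameterizedTest
    @CsvSource({
        "secret,     false",  // too short
        "secretword, false",  // no digit
        "s3cretword, true"    // long enough, contains a digit
    })
    void examplesFromTheStoryCard(String password, boolean expected) {
        assertEquals(expected, new PasswordPolicy().accepts(password));
    }
}
```

Once these examples run in the continuous integration build, "tested" stops being a separate phase and becomes a property the story gains while it is being implemented.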
Last, the problem of testing tasks being put off until it is too late is surely a point that the ScrumMaster should bring to the table during the retrospective. Seriously, I wouldn't expect it to be dealt with in the first or second retrospective, but I would make the situation transparent to the team and help them work out solutions. This could mean bringing in a sprint where the pending testing tasks are taken care of by the whole team – meaning that no additional functionality is developed for two or four weeks. Or it could mean that a programmer pairs up with the tester to help with the test automation efforts – maybe on a rotating basis. It could also mean that the programmers take care of the testing-related tasks for some time. I hope you get the point by now. As a ScrumMaster I can facilitate the discussion on this and help the team make a conscious decision about what to do about the problem, but it surely has to be raised – if not during the Daily Scrums or stand-ups, then at the latest during the retrospective.
Question:
In addition to the last question: technical debt may be noticed during reviews, but at that point the sprint is already over.
How can a Scrum team deal with this?
Answer:
I see two dimensions to the term “review” here. The first one is the Sprint Review, described in Scrum as a demonstration of the product increment built during the last sprint. At this point, the ProductOwner should reject unfinished tasks and stories as described earlier, put these stories back into the backlog, and have the team decide on necessary process adaptations during the retrospective. Please also note that the ProductOwner might decide to put these stories at the end of the backlog if their priority has changed in the meantime. This could lead to throwing away code developed during the last iteration, but a working product should exist at the end of each iteration.
The second meaning of “review” here is the more traditional code review and inspection process, which CMMI also recommends. Please note that I haven't heard any of the Agile gurus speak about this sort of review before, and here is why. In an ideal world, the team would be used to pair programming, developing code with two people sitting in front of a single PC. Not only do the two bring different perspectives to the code just developed, but there is also an implicit code review built right into the process. Since every decision in a paired set-up is made by both of the people in front of the PC, the code review happens while the code is written. The same holds when a tester pairs with another tester, and of course also when a tester pairs with a programmer. Now, if technical debt is created during a paired set-up, I would take a closer look at why, and help the team decide what they might want to do about it.
Question:
How should a test phase for all newly developed stories of a sprint – and maybe regression tests after development within the sprint – be tailored? Who takes care of the overall test cases, the system test? What are your experiences?
Answer:
Fully automated regression tests as well as system-level functional tests are a by-product of test-driven development and the thing we still call acceptance test-driven development (a name which might change soon). If a team does not create these, it will surely set itself up for trouble in the iterations to come. The job of the ScrumMaster is to make this transparent and to help the team decide how to deal with it.
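To make this less abstract: in FitNesse, for example, acceptance tests are wiki tables backed by small fixture classes. A sketch of such a SLIM decision-table fixture – the discount rules and all names are invented for illustration – could look like this:

```java
// Sketch of a FitNesse SLIM decision-table fixture. In the wiki, a
// decision table with the columns |order total|discount?| would drive
// this class row by row: SLIM calls the setter for each input column
// and the matching method for each output column.
public class DiscountCalculatorFixture {

    private int orderTotal; // assumed to be in whole euros

    // Input column "order total"
    public void setOrderTotal(int orderTotal) {
        this.orderTotal = orderTotal;
    }

    // Output column "discount?" – invented rule: 10% above 100 euros
    public int discount() {
        return orderTotal >= 100 ? orderTotal / 10 : 0;
    }
}
```

Once such tables run in the continuous integration build, they double as the regression and system-level test suite the question asks about – nobody has to tailor a separate test phase for them.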
Question:
During Scrum sprints, tests are mostly developed for the current user stories.
How does Agile specify and test what the software shall not do?
Answer:
Not at all, at best. Why? There are a gazillion ways not to do something. A requirement such as “The system shall not receive an email” might be understood as “the system does not receive any email at all”, or “the system disconnects from the internet so that it won't receive an email”, or “the system crashes the operating system so that the computer becomes unusable”. Any requirements engineer will surely tell you that specifying something negatively is not such a great idea in the first place.
Instead, the currently developed user stories serve as a placeholder for a discussion to take place during the sprint. In this discussion, the ProductOwner and the team find out the constraints for the user story. What are the performance constraints for filtering some datasets from the database? What should be found? Are there examples where the filtering does not find anything? The answers to these questions are noted on the story cards and later turned into automated tests. So, besides positive tests of what the user story delivers, there are always constraining factors, which are discussed and implemented by the whole team. If you get all this done, you either shouldn't need to bother with negative test cases, or you put a new story into the product backlog for them.
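To illustrate with the filtering example above – the CustomerFilter class and its data are invented for illustration – one of the constraining examples from the story discussion becomes an ordinary automated test:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import java.util.stream.Collectors;

import org.junit.jupiter.api.Test;

// Hypothetical filter for the dataset example discussed above.
class CustomerFilter {
    List<String> byCity(List<String> cities, String wanted) {
        return cities.stream()
            .filter(wanted::equals)
            .collect(Collectors.toList());
    }
}

class CustomerFilterConstraintTest {

    // Constraining example from the story card: filtering for a city
    // that occurs nowhere must yield an empty result, not an error.
    @Test
    void filterThatMatchesNothingReturnsEmptyResult() {
        List<String> result = new CustomerFilter()
            .byCity(List.of("Hamburg", "Berlin"), "Munich");
        assertTrue(result.isEmpty());
    }
}
```

Note that this is still a positive specification of agreed behavior – “finds nothing and stays calm” – rather than an open-ended test of everything the software shall not do.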