Agile Practices in a Traditional Environment

I have put up the slides from my presentation at the Agile Testing Days. It was a tough talk for me, the first conference presentation I have given. Before the presentation I was very nervous, but I had a good feeling afterwards. I also realized that I will have to work on my entertainment skills for future presentations.

Among the things I only mentioned verbally during the talk (there was no video taken) are three more outcomes of our work. The first is that over the course of our work a colleague transitioned from the testing group to the development group. The second is that I was able to learn Java sufficiently to contribute to the test framework FitNesse over the course of this year. Third, I recently paired with a developer on fixing a bug in the production code. I noticed the bug when it was first filed, wrote a failing acceptance test for it, and decided to help the developer with the fix, since I would have been blocked otherwise. I showed him how to rewrite the rather complex if-then-else chain in the code, which was not covered by fast-feedback unit tests, and afterwards we fixed the bug and delivered the fix.
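
The production code itself is nothing I can share, but here is a minimal, hypothetical sketch of the kind of rewrite I mean: a dispatching if-then-else chain replaced by a lookup table. All names and values are invented for illustration.

```java
import java.util.Map;

public class MessageRouter {
    // Before: a chain like this, without fast-feedback unit tests,
    // makes every bug fix risky.
    public String routeBefore(int statusCode) {
        if (statusCode == 200) {
            return "deliver";
        } else if (statusCode == 301) {
            return "redirect";
        } else if (statusCode == 404) {
            return "drop";
        } else {
            return "retry";
        }
    }

    // After: a lookup table states the same decision in one place;
    // unknown codes fall back to "retry".
    private static final Map<Integer, String> ROUTES =
            Map.of(200, "deliver", 301, "redirect", 404, "drop");

    public String routeAfter(int statusCode) {
        return ROUTES.getOrDefault(statusCode, "retry");
    }
}
```

With a failing acceptance test pinning down the expected behavior, such a rewrite becomes a safe preparation step for the actual fix.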

Since I know it will be hard to understand anything from my rather condensed presentation format, I also decided to put up the nine-page paper I wrote. You can find the paper as a PDF here. If you attended my presentation and are looking for more in-depth knowledge of what we did, take a look at it.
The paper walks you through an application of Agile practices in a traditional environment, where a small group of testers used the practices to successfully convert their automated test cases to a maintainable new automation approach. The bibliography section at the end also lists the book references I gave.

Thanks to Matt Heusser, Gojko Adzic, and Mike Scott for their help with the presentation. Lisa Crispin, Brett Schuchert, Stephan Flake, and Gregor Gramlich reviewed the paper; thanks to them, too.

My Definition of Done

Over the course of the Agile Testing Days I noticed that the definition of “Done” is troubling for Agile teams. It came up not only in the tutorial I took, but also in several keynote speeches and presentations. Personally, I was wondering why. During the last three days I thought “Done” simply means that you can make money with it. Sounds like an easy definition, doesn’t it? If your customer or product owner is not willing to pay money for the implemented user story, it’s not “Done”. Period.

But today I came up with another definition of “Done”, which turned out to be strikingly simple in the end. For this definition, let me combine several influences from Elisabeth Hendrickson, Adam Goucher, Kent Beck, and many more into a remarkably easy whole.

Elisabeth Hendrickson told me that “Done” simply means implemented, tested (or checked, as Michael Bolton defined it), and explored. Very easy, isn’t it? But there’s more to it. Implemented means it is designed and written in, probably, some programming language, right? Does this make sense? Considering my visit to the Software Craftsmanship Conference in February, I noticed that it seems to be accepted, at least among the craftspeople I met there, that implementing means doing it using Test-Driven Development. There was no discussion or arguing about this; it was simply taken as fact, and work was done in a TDD style. But remember, for Kent Beck TDD means Red – Green – Refactor, right? A three-part meme. Oh, another one. Great.
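
To make the tricolon concrete, here is a minimal JUnit sketch of one cycle; FizzBuzz is just a stand-in example of mine, not anything from the conference.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FizzBuzzTest {
    // Red: this test fails until say() knows about multiples of three.
    @Test
    public void multiplesOfThreeSayFizz() {
        assertEquals("Fizz", new FizzBuzz().say(3));
    }
}

class FizzBuzz {
    // Green: the simplest thing that passes the test.
    // Refactor: with the bar green, clean up names and structure
    // while the tests keep watching.
    public String say(int number) {
        return number % 3 == 0 ? "Fizz" : String.valueOf(number);
    }
}
```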

The next term in Elisabeth Hendrickson’s definition of “Done” is tested (or checked). Considering current approaches to capturing the customer’s requirements in an Acceptance Test-Driven Development style, I take tested to mean that we agreed on examples to implement, and that these examples serve as acceptance criteria in an executable manner. So tested here actually means that the code passes the acceptance tests that were agreed upon upfront and perhaps elaborated by the testers on the project to also cover corner cases that would otherwise have been missed. On Monday Elisabeth Hendrickson taught me that ATDD can be thought of as Discuss – Develop – Deliver. Gojko Adzic corrected this over Twitter to Describe – Demonstrate – Develop. But what do we have here? I claim that tested (or checked) refers to an ATDD style of development, which is itself definable as a tricolon. So, wrapping up, we have

  • Done is Implemented using TDD, Tested via ATDD and Explored
  • Implemented using TDD is Red – Green – Refactor
  • Tested via ATDD is Discuss – Develop – Deliver

(Forgive me, Gojko, but I find Elisabeth’s definition more intuitive to remember.)
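
To give the ATDD leg of the definition a concrete shape, here is a minimal FIT/FitNesse-style sketch. The discount rule and all names are invented for illustration: the wiki table holds the examples the team agreed on, and the fixture makes them executable.

```java
import fit.ColumnFixture;

// Backs a customer-facing wiki table such as:
//
//   | Discount    |             |
//   | order total | discount?   |
//   | 99.00       | 0.00        |
//   | 100.00      | 5.00        |
//
// Each row is one agreed example; FIT runs them all and colors
// the result cells green or red.
public class Discount extends ColumnFixture {
    public double orderTotal;

    public double discount() {
        return orderTotal >= 100.00 ? orderTotal * 0.05 : 0.00;
    }
}
```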

Oh, I’ve got one more. Explored clearly refers to Exploratory Testing. It might be a coincidence, but Adam Goucher came up with a definition of Exploratory Testing today in a tricolon manner, too: Discover, Decision, Action. Sounds reasonable to me. During Exploratory Testing we discover information about the product which we previously did not know. Based on that information we decide what to do about it. Michael Bolton likes to ask here: “Problem or not a problem?”, so that I can decide what to do next and inform the next test executed. After that we take the next reasonable action based on the information we just found out about our product. To make the terminology more cohesive, I propose to take explored to mean Discover – Decide – Act.

So, to wrap it up: just as we have Practices, Principles, and Values in Agile, and just as we learn in a Shu – Ha – Ri fashion (thank you, Declan Whelan, for saving the Agile Testing Days for me by mentioning it), we can define “Done” in the same manner:

  • Done is Implemented using TDD, Tested via ATDD and Explored using Exploratory Testing
  • Implemented using TDD is Red – Green – Refactor
  • Tested via ATDD is Discuss – Develop – Deliver
  • Explored using Exploratory Testing is Discover – Decide – Act

Agile Testing Days Berlin I – Acceptance Test-Driven Development

Over the past three days I was able to attend the Agile Testing Days in Berlin. It was a great conference; I met very interesting people and attended really great talks and keynotes. Personally, I hope to get back there next year. I wouldn’t have thought such a great event could take place in Germany, with such brilliant people from all around the world attending. Jose Diaz and Alex Collino did a great job putting on the conference. Here is a series of reflective write-ups about certain aspects of the conference. I decided to post them one by one, just as I experienced the conference, with a short wrap-up at the end. The first entry in this series is about the tutorial session I participated in on Monday.

How I would test this – Part II – The refinement

After having proposed the basic idea for Matthew Heusser’s latest testing challenge, I would like to present a refinement based on Matt’s answers. I also hope to be able to come up with a third part after getting some feedback just before the deadline. Please note that I have deliberately ignored the other answers and reactions so far, except for one answer from Philk, which I found funny to read, and mind-opening when compared to the company I’m working at.

The five-minute challenge

The high level test approach I would like to choose here consists of some main charters:

  • Explore signal sending
  • Explore conversations
  • Explore networks
  • Explore the remains: Drop-down filters, history, RSS feed

Some more explanation will be useful here, so let me spell these charters out in a bit more detail.

Explore signal sending

A first few ideas I would draft in my notes for the exploratory tests on this item: send a regular message, send a message with exactly 140 characters, try too few and too many characters. Try replying to all sorts of messages, noticing whether the counter with the remaining characters is reduced properly. What happens if I try to reply to a message that already has 140 characters? Is there a configurable bell sound to notify me? A visual aid? Is there a visual aid when the number of remaining characters drops below a certain threshold? Highlighted messages, self-highlighting, private messages: are they really private? Will the background be green or gray? May I delete private messages, too (as indicated by the trash symbol in the view)? What about the timestamping behavior? Are timestamps properly updated? Are they updated live or just on refresh?

After sketching out these few tests, I will be in a position to think about what to automate for this feature. The automated tests would include at least sending a regular message, failing to send a too-short and a too-long message, message replies, and private messages (create, read, delete). If the exploration uncovers other hidden risks, tests for those would be added, too. I would try to integrate as well as possible into the existing automation approach, which is probably based on Watir or Selenium. If that proves impossible, I would need to come up with a self-crafted driver for the REST API, create users, and so on. I would try to get in touch with the developers and ask for their help on this as much as possible.
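
As a sketch of what the first automated check could look like, assuming Selenium WebDriver is the existing approach; the element ids and the URL are invented, since the real locators depend on the application:

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertEquals;

public class SendMessageTest {

    @Test
    public void typingReducesTheRemainingCharacterCounter() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Assumed local test deployment of the application.
            driver.get("http://localhost:8080/app");
            driver.findElement(By.id("message")).sendKeys("hello");
            // 140-character limit minus the 5 characters typed.
            assertEquals("135",
                    driver.findElement(By.id("remaining")).getText());
            driver.findElement(By.id("send")).click();
            // Further assertions would check the message in the stream.
        } finally {
            driver.quit();
        }
    }
}
```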

Explore conversations

Edit a page, add a comment, edit your own comments. Edit my profile, tag my profile, tag the profile of someone else. Delete a tag on my own profile and delete a tag on another profile, a tag someone else set. Check that I can’t edit another user’s edit; check that I can edit another user’s page and that this raises a conversation event. Watch a page and have another user edit and comment on it.

Tests to automate include editing a page, adding a comment, editing a comment, editing my profile, tagging my profile, tagging someone else’s profile, and deleting tags on my own profile, on another profile, and another user’s tag on any profile. Also editing another user’s page to raise a conversation event, and watching a page while it gets edited and commented on. While I’m at it, checking that I can’t edit another user’s edit should be possible, too. So I would basically automate all of the above.

Explore networks

Follow people, unfollow people, follow people and send a message, follow people and check their messages. Conversations were already explored in a different context. Maybe switching networks in between could yield some more information, but that is up to the exploration.

Automation should cover following and unfollowing people, in combination with sending some messages in between, at least.

Explore the remains: Drop-down filters, history, RSS feed

Open all drop-down filters; check spelling, consistency with the interface representations, and that they properly overlay the underlying web page. Scroll through message histories: older, newer, going back to the newest. Subscribe to the RSS feed and watch some notifications appear. The wrench symbol in the right corner of the window indicates some advanced setting. What happens if I click there? What can I explore around there? What happens if I resize the window? What do minimize plus restore do to the view? When hitting the close button while there is some typed text in the edit box, is there a question asking whether I want to throw away my changes?

That’s pretty much what I can foresee at the moment. The general automation approach is interleaved with the other activities I would like to do during the iteration. Two weeks is a fair amount of time for this feature, but I doubt it will be the only one in the iteration. Therefore there will be some need to limit the actual testing of the widget, so that time permits further testing of other stories. Basically I would try to timebox the testing activities around this, not the bug fixing, to two days if possible. With good knowledge of how to automate tests for this beast, that should be feasible, but it will depend on the team. Maybe by then I could also help our developers with the unit tests for this, but as time unfolds there will be more information on these questions.

How I would test this – Part I – The basic idea

Matt Heusser from time to time comes up with challenges related to testing. The latest one introduces a web application and asks for a strategy to stress it out. Here is my (late) response to the challenge.

First of all, there are some questions I would need to ask in order to prepare my testing and checking activities for the delivery of business value.

Who is the customer here? The customer seems to be a product owner or product manager, but it could also be a real end customer who wants a new business web page. What about easy access to expert users? Who is available for the clarification of open questions? Is the product owner taking care of this? Is there an on-site customer? Is there a customer proxy?

How do the developers work together with the QA team? Are they ignoring them? Are they using TDD to develop the software? Will the testers be able to walk through the delivered product like a hot knife through butter, or will they face a hard time with a real bug-fixing phase? Which process does the Agile team follow? Scrum? XP? Crystal? Some mix? The iteration length is two weeks, which is a pretty fair schedule. Do the testers participate in the planning game, or are they kept separate? Is the team new to Agile, or have they delivered continuously over the last five years? Did they have success with the Agile adoption? Which practices do they use? Which practices do they not use?

Which equipment and tools do the developers use? Which equipment and tools do the testers have access to? Which tools did they successfully use in the past? Is there any new technology involved for the team? Have the team members, developers and testers alike, dealt with the technology involved before? Is the design test-friendly? Is it possible to test behind the GUI?

What is going to be delivered? Is a test report necessary? In which form? Which documentation needs to be created? Which documentation can be used as a basis for more informed testing? Which similar products exist on the market? Matt already named a few, but is this application like those mentioned, or something completely new? Which features are incorporated into the product so that it will outperform the existing ones? What do the binary deliverables look like? Is the web page hosted on an internal server, or is there an installer or even a packaged CD going to be sold?

Is there a bug backlog which the testers have to deal with? Are team members assigned to the project on a 100% basis? Are there intervening projects that might ask for particular specialists from this project? Is the product already available? Is there a successful build at least once a day? More often? Less often? What do the release plans look like? Which overall timeframe is set, and does the testing have to happen in parallel?

OK, this is the basic brainstorming and the questions I would start asking in order to make a more informed decision on the test approach to follow. While these questions are being answered, let me start drafting the approach; it makes some assumptions about the answers above, but should be easy to change during the course of the testing.

Based on the problem description, my past experience, and the fact that I am dealing with an Agile team, I would take the first steps with timeboxed Exploratory Testing sessions on the product if the testing takes place as a separate phase. Mainly this serves as a learning exercise and an information-gathering process. Ideally it would be done as a pairing session, so that both team members go through a similar learning curve. During the initial two to three days there will be a mix of learning in ET sessions and preparations for the test automation to follow.

If testing and development run in parallel, there is a need to pair with customer representatives during the first days. I would try to prepare at least one acceptance test per planned story for the upcoming iteration, and more for the harder business conditions. Of course, the assumption here is that it is possible, to some degree, to test behind the GUI. Some framework for testing the GUI might also be necessary for the UI checks.
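
To illustrate what an acceptance test behind the GUI could look like, here is a minimal sketch against an assumed service layer; NetworkService and its methods are hypothetical stand-ins for whatever API the application exposes beneath its UI.

```java
import java.util.*;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class FollowUserAcceptanceTest {

    @Test
    public void followingAUserAddsThemToMyNetwork() {
        NetworkService service = new NetworkService();
        service.follow("alice", "bob");
        assertTrue(service.networkOf("alice").contains("bob"));
    }
}

// Hypothetical stand-in for the application's service layer.
class NetworkService {
    private final Map<String, Set<String>> networks = new HashMap<>();

    void follow(String follower, String followed) {
        networks.computeIfAbsent(follower, k -> new HashSet<>()).add(followed);
    }

    Set<String> networkOf(String user) {
        return networks.getOrDefault(user, Collections.emptySet());
    }
}
```

Tests at this level stay fast and stable, while a thin layer of GUI checks can cover what only the UI can tell us.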

So far, this is the high-level idea I can provide. For the particular business conditions mentioned, I will follow up with another blog entry next week. Maybe most of my questions will be answered by then, so that I can come up with some more informed approaches, too.