Category Archives: Agile Testing

Testing inside agile development cycles

Agile Practices in a Traditional Environment


I have put up the slides from my presentation at the Agile Testing Days. It was a tough talk for me: the first conference presentation I ever gave. I was very nervous beforehand, but I had a good feeling afterwards, and I realized that I will have to work on my entertaining skills for future presentations. Among the things that I only mentioned verbally during the talk – there was no video recording – are three further outcomes of our work. First, over the course of our work a colleague transitioned from the testing group to the development group. Second, I was able to learn enough Java programming over the course of this year to contribute to the test framework FitNesse. Third, I recently paired with a developer on fixing a bug in the production code. I noticed the bug when it was first filed, wrote a failing acceptance test for it, and decided to help the developer with the fix, since I would have been blocked otherwise. I showed him how to rewrite the rather complex if-then-else chain the code had grown – not covered by fast-feedback unit tests – and afterwards we fixed the bug and delivered the fix.
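
I cannot share the actual production code here, but the sketch below shows the kind of refactoring I mean: collapsing a hard-to-test conditional chain into a lookup table of small rules that can each be pinned down by a fast-feedback unit test before the fix is attempted. The ShippingCost class, the region codes and the numbers are all made up for illustration.

    import java.util.Map;
    import java.util.function.DoubleUnaryOperator;

    // Hypothetical example: a branching calculation that used to live in a long
    // if-then-else chain, rewritten as a lookup table of small, testable rules.
    public class ShippingCost {

        // Before (sketch): if ("EU".equals(region)) {...} else if ("US".equals(region)) {...} ...

        private static final Map<String, DoubleUnaryOperator> RULES = Map.of(
                "EU", weightInKg -> 4.90 + 0.50 * weightInKg,
                "US", weightInKg -> 6.90 + 0.75 * weightInKg,
                "ASIA", weightInKg -> 8.90 + 1.00 * weightInKg);

        public double costFor(String region, double weightInKg) {
            DoubleUnaryOperator rule = RULES.get(region);
            if (rule == null) {
                throw new IllegalArgumentException("Unknown region: " + region);
            }
            return rule.applyAsDouble(weightInKg);
        }
    }

With each rule isolated like this, writing the failing test for the reported bug before touching the production code becomes much easier.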

Since I know it will be hard to understand much from my rather condensed presentation format, I also decided to put up the nine-page paper I wrote. You can find the paper as a PDF here. If you attended my presentation and are looking for more in-depth knowledge of what we did, take a look at it.
The paper walks you through an application of Agile practices in a traditional environment, where a small group of testers used the practices to succeed in converting their automated test cases to a maintainable new automation approach. The bibliography section also contains the book references I gave.

Thanks to Matt Heusser, Gojko Adzic and Mike Scott for their feedback on the presentation, and to Lisa Crispin, Brett Schuchert, Stephan Flake and Gregor Gramlich for reviewing the paper.

My Definition of Done

Over the course of the Agile Testing Days I noticed that the definition of “Done” is troubling for Agile teams. It was raised not only in the tutorial I took, but also in several keynotes, and several presentations dealt with it as well. Personally I was wondering why. During the last three days I thought “Done” simply means that you can make money with it. Sounds like an easy definition, doesn’t it? If your customer or product owner is not willing to pay money for the user story implemented, it’s not “Done”. Period.

But today I came up with another definition of “Done”, which turned out to be strikingly easy in the end. For this definition let me put together several influences from Elisabeth Hendrickson, Adam Goucher, Kent Beck, and many more into a remarkably simple combination.

Elisabeth Hendrickson told me that “Done” simply means implemented, tested (or checked, as Michael Bolton defined it) and explored. Very easy, isn’t it? But there’s more to it. Implemented means it is designed and written in, probably, some programming language, right? Does this make sense? Thinking back to my visit to the Software Craftsmanship Conference in February, I noticed that it seems to be accepted – at least for the craftspeople I met there – that implementation means doing it using Test-Driven Development. There was no discussion or arguing about this; it was simply taken as a fact, and it was done in a TDD style. But, remembering Kent Beck, TDD means Red – Green – Refactor, right? A three-part meme, oh, another one. Great.
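
For readers who have not seen the cycle spelled out in code, here is a minimal sketch of Red – Green – Refactor with JUnit. The PriceCalculator class and its discount rule are invented purely for illustration.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // RED: write a failing test first; PriceCalculator does not even exist yet.
        @Test
        public void ordersOverOneHundredGetATenPercentDiscount() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(108.0, calculator.totalFor(120.0), 0.001);
        }
    }

    // GREEN: write just enough production code to make the test pass.
    class PriceCalculator {
        double totalFor(double orderValue) {
            if (orderValue > 100.0) {
                return orderValue * 0.9;
            }
            return orderValue;
        }
    }

    // REFACTOR: with the test kept green, clean up names, remove duplication,
    // or extract the discount rule into its own concept.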

The next term in Elisabeth Hendrickson’s definition of “Done” is tested (or checked). Considering current approaches to capturing the customer’s requirements in an Acceptance Test-Driven Development style, I would like tested to mean that we agreed on examples to implement, and that these examples serve as acceptance criteria in an executable manner. So tested here actually means that the code was able to pass the acceptance tests that were agreed upon up front, and that were perhaps elaborated by the testers on the project to also cover corner cases that had been missed. (A small executable sketch of such an agreed example follows the list below.) On Monday Elisabeth Hendrickson taught me that ATDD can be thought of as Discuss – Develop – Deliver. Gojko Adzic corrected this over Twitter to Describe – Demonstrate – Develop. But what do we have here? I claim that tested (or checked) refers to an ATDD style of development, which is again definable as a tricolon. So, wrapping up, we have

  • Done is Implemented using TDD, Tested via ATDD and Explored
  • Implemented using TDD is Red – Green – Refactor
  • Tested via ATDD is Discuss – Develop – Deliver

(Forgive me Gojko, but I find Elisabeth’s definition more intuitive to remember.)
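
Here is the small executable sketch of an agreed example that I promised above. The rows are hypothetical examples a customer, a developer and a tester could have agreed on before implementation; the calculation reuses the invented PriceCalculator from the TDD sketch. A team using FitNesse would express the same rows as a wiki decision table instead of a JUnit test.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical acceptance test: each row is an example agreed upon up front.
    public class DiscountAcceptanceTest {

        private final double[][] agreedExamples = {
                // order value, expected total
                {  50.0,  50.0 },  // no discount below the threshold
                { 100.0, 100.0 },  // boundary case added by the testers
                { 120.0, 108.0 },  // 10% discount above 100
        };

        @Test
        public void allAgreedExamplesPass() {
            PriceCalculator calculator = new PriceCalculator();
            for (double[] example : agreedExamples) {
                assertEquals(example[1], calculator.totalFor(example[0]), 0.001);
            }
        }
    }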

Oh, I got one more. Explored clearly refers to Exploratory Testing. It might be a coincidence, but Adam Goucher came up with a definition of Exploratory Testing today in a tricolon manner, too: Discover, Decision, Action. Sounds reasonable to me. During Exploratory Testing we discover information about the product that we previously did not know. Based on that information we decide what to do about it. Michael Bolton likes to ask me here: “Problem or not a problem?” so that I can decide what to do next and inform the next test executed. After that we take the next reasonable action based on the information we just found out about our product. To make the terminology more cohesive, I propose to coin explored to mean Discover – Decide – Act.

So, to wrap it up: just as we have Practices, Principles and Values in Agile, and we learn in a Shu – Ha – Ri fashion (thank you, Declan Whelan, for saving the Agile Testing Days for me by mentioning it), we can define “Done” in this same manner:

  • Done is Implemented using TDD, Tested via ATDD and Explored using Exploratory Testing
  • Implemented using TDD is Red – Green – Refactor
  • Tested via ATDD is Discuss – Develop – Deliver
  • Explored using Exploratory Testing is Discover – Decide – Act

Agile Testing Days Berlin I – Acceptance Test-Driven Development

Agile Testing Days Website
Over the past three days I was able to attend the Agile Testing Days in Berlin. It was a great conference: I met very interesting people and attended really great talks and keynotes. Personally I hope to get back there next year. I wouldn’t have thought such a great event could take place in Germany, with such brilliant people from all around the world attending. Jose Diaz and Alex Collino did a great job putting on the conference. Here is a series of reflective write-ups about certain aspects of the conference. I decided to bring them up one by one, just as I experienced the conference, with a short wrap-up at the end. The first entry in this series is about the tutorial session I participated in on Monday.

Continue reading Agile Testing Days Berlin I – Acceptance Test-Driven Development

August of Testing

This blog post will serve as a catch-all for the web pages I left open in my browser tabs during the last few weeks. In case you’re interested in what I have been up to since the beginning of August, it might be worth a read for you.

Back in July Mike Bria posted an article on Test objects, not methods. His lesson pretty much boils down to a single quote that I need to keep reminding myself of:

Keep yourself focused on writing tests for the desired behavior of your objects, rather than for the methods they contain.
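
To make the difference concrete, here is a minimal sketch with an invented Cart class: the first test mirrors a method name and says little about intent, while the second describes a behavior the object promises to its callers.

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Tiny made-up production class, just enough to make the tests runnable.
    class Cart {
        private final List<Double> prices = new ArrayList<>();

        void addItem(String name, double price) {
            prices.add(price);
        }

        double total() {
            return prices.stream().mapToDouble(Double::doubleValue).sum();
        }
    }

    public class CartTest {

        // Method-focused: named after the method under test.
        @Test
        public void testAddItem() {
            Cart cart = new Cart();
            cart.addItem("book", 12.50);
            assertEquals(12.50, cart.total(), 0.001);
        }

        // Behavior-focused: named after what the object should do for its callers.
        @Test
        public void totalReflectsEveryItemPlacedInTheCart() {
            Cart cart = new Cart();
            cart.addItem("book", 12.50);
            cart.addItem("pen", 2.50);
            assertEquals(15.0, cart.total(), 0.001);
        }
    }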

Over the past weekend Michael Bolton came up with a final definition of checking vs. testing. Lately I had been arguing about whether or not we need a new name for testing. In this post Michael Bolton was able to explain to me that we do not need a new name, but a proper definition. This definition, in combination with Perfect Software, gives you pretty good insight into what to expect from testing and what to expect from checking. Adam Goucher summarized a talk from Chris McMahon at the Agile 2009 conference with the following insight:

The only reason to automate is to give expert testers time to manually test.

Ivan Sanchez stated that there is another area where a term has lately been abused with vague definitions. The term is “done”, and he calls for us to stop redefining it over and over. Basically my experience shows that the same is pretty much true for a lot of words in our business: Lean, Test, Done, Agile, … maybe you have more of these for the comments.

On my way home today I thought over a job profile I read during the past month. If I were to adapt it to my company, I would include “You know the difference between testing and checking” in the skill list.

Finally, Matt Heusser put up another test challenge. What I loved about it was the fact that I could more or less directly relate the lesson to my daily work. So it’s nothing really artificial if you build the bridge between the abstract challenge and your real-world project. Oh, and of course, for a geek like me it was fun. I wonder if I could do the challenge in the real world some time. I love lightsabres. And as a black-belt member of the Miagi-Do school I had to take that challenge, of course. Feel free to exchange some thoughts on the challenge with me if you have taken it.

Interview with Gerard Meszaros

A while ago Matt Heusser asked me to help him out with some interviews. Today InformIT announced the interview with Gerard Meszaros that I helped with. Its title is The Future of Agile Software Testing with xUnit And Beyond, and I’m glad that I could be of some help there. If you haven’t read Meszaros’ book, make sure to order it. It covers more than just unit testing if you read between the lines.

Developer-tester, Tester-developer

During this week I watched the following conversation between Robert C. Martin and Michael Bolton on Twitter:

Uncle Bob
@dwhelan: If you’ve enough testers you can afford to automate the functional tests. If you don’t have enough, you can’t afford not to.

Michael Bolton
Actually, it’s “if you have enough programmers you can afford to automate functional tests.” Why should /testers/ do THAT?

Uncle Bob
because testers want to be test writers, not test executers.

Michael Bolton
Testers don’t mind being test executors when it’s not boring, worthless work that machines should do. BUT testers get frustrated when they’re blocked because some of the programmer’s critical thinking work was left undone.

Uncle Bob
if programmers did all the critical thinking, no testers would be required. testers should be specifying at the front and exploring all through; not repeating execution at the end.

Michael Bolton
I’m not suggesting that programmers do all the critical thinking, since programmers don’t do all of the project work. I am suggesting, however, that programmers could do more critical thinking about their own work (same for all of us). Testers can help with specification, but I think specification needs to come from a) those who want and b) those who build.

Over the weekend I thought through the reasoning. First of all I truly believe that both are right. Elisabeth Hendrickson stated this in the following way:

In any argument between two clueful people about The Right Way, I usually find that both are right.

Truly, both Uncle Bob and Michael Bolton are clueful people. In the conversation above they seem to be discussing The Right Way, and I believe they are both right – to some degree, depending on the context. Basically this is how I got introduced to the context-driven school of testing.

The clueful thing I realized this morning was my personal struggle. Some months ago I struggled with whether I am a tester-developer or a developer-tester. Raising this point in my head again during the conversation between a respected developer and a respected tester opened my eyes. Three years ago I started as a software tester at my current company. Fresh from university, I was introduced into the testing department, starting by developing automated tests based on shell scripts. Eventually I mastered that work and was appointed to a leadership position about one and a half years later. Until that point I thought testing was mostly about stressing the product using some automated scripts. Then I came across “Lessons Learned in Software Testing” and was taught a completely new way to view software testing.

The discussion between these two experts in their field raised the point of my personal struggle. Testing is more than just writing automated scripts. Executing the same tests over and over again is a job a student can do. It’s not very thought-provoking, and it gets boring. Honestly, I didn’t earn a diploma in computer science (with a major in robotics) to stop thinking at my job. Therefore I got into development topics. Since our product doesn’t offer good ways for exploratory testing, I came up with better test automation. Basically the tools I invented for the automated tests also aid me in my quest to be good as an exploratory tester.

So what was I struggling with? Basically I realized that at my job the programmers get all the kudos. That’s why I started investing time in becoming a better programmer. Meanwhile I have found out that it’s also a fun thing to do: you can see the results whenever you run your programs. This is why I never gave up looking at code and dealing with it. Sure, it’s not what Michael Bolton or Jerry Weinberg mean by software testing. On the other hand, it’s what I would like to do.

Basically I consider myself a developer of software test automation. I have a background in software development and an understanding of software testing. (I leave it up to my clients and superiors whether I’m a good one or not.) The way I understand my profession, it’s necessary for me to know about both sides. This is also why I now realize that I am a tester-developer, not a developer-tester. As I just realized, my personal struggle comes from the fact that I am not sure whether I ever really was a software tester or not. But for sure I have started to become a developer.

To conclude this post with the initial discussion between the two experts in their fields, there is one thing left to say. As a software tester you may choose to become a developer of test automation; Robert Martin refers to these kinds of testers. On the other hand, as a software tester you may choose to become a tester who applies critical thinking; Michael Bolton refers to these kinds of testers. Whether to choose one path over the other may be up to you.

Overview of Agile Testing

In the just-released July issue of the Software Test and Performance Magazine there is an article by Matt Heusser, my mentor in the Miagi-Do School of Software Testing, and Chris McMahon, introducing the most basic terms surrounding Agile Testing. While they were writing it, I was able to provide some feedback. It seems to have been enough feedback to get a mention at the end of the article. Basically I’m pleased that I was able to help.

Since I mentioned the Miagi-Do School, I have to make clear that the term school is not meant in the Kuhnian way. James Bach pointed out to me that he uses the term school regarding the five schools of software testing (Analytic, Standard, Quality, Context-driven and Agile) in the sense of a school of thought, that is, of adopting a mindset. As James put it to me, there are few circumstances where being driven by the context is counterproductive, and he challenged me to think about three such situations. By the way: can you think of three situations where being driven by the context of the situation is unnecessary?

From what I understand of Matt’s Miagi-Do school, it is not to be understood in the Kuhnian way either.

My Renaming attempt

After a discussion on the Agile-Testing mailing list, I decided to give up the proposal to rename the Agile School of Testing. Erik Petersen put it in a way that I fully agree with, so I decided to quote him here:

The schools as defined by Brett in soundbites are:

Analytic School
sees testing as rigorous and technical with many proponents in academia

Standard School
sees testing as a way to measure progress with emphasis on cost and repeatable standards

Quality School
emphasizes process, policing developers and acting as the gatekeeper

Context-Driven School
emphasizes people, seeking bugs that stakeholders care about

Agile School
uses testing to prove that development is complete; emphasizes automated testing

I see no evidence in those descriptions that the Agile school has a monopoly on examples. All of these schools choose examples to demonstrate that a system appears to deliver their interpretation of functionality at a point in time, with differing degrees of attention to context and risk. I believe the Schools idea was originally intended to describe groups who tended to favor their ideas (dogma?) over others and focused mainly on functional testing, and when the original Agile School was named, it claimed to be replacing all the other schools. This has since changed considerably, and with new techniques such as Mocking and Dependency Injection and a focus on refactoring (CRAP metric anyone?) I would argue that Agile is much more about design and development aimed at simplicity (YAGNI), of which automated testing is only a part, rather than a specific School of (functional) testing. As I have said before, schools tend to manifest themselves in organizational culture and IMHO are relevant for discussion purposes only. Testing can involve many ideas, some of which are typically associated with schools, and depending on context and risks, testing can draw from all of them. My 2 cents.
Part of the problem is what the earliest schools have become, not what they started as. The original articles on waterfall in the early 1970s stated that just going from dev to test in one step never worked and needed to iterate between the two, but that got lost. In the mid 70s, Glenford Myers in amongst all his “axioms of testing” said that tests need to be written before being executed (because computers were million dollar machines and time was money, no longer such an issue) but he also said stop and replan after some execution to focus on bug clusters, and that also got lost. We need to be open to new ideas and weigh them against our current ones, based on their value and not the perceived baggage they bring from a particular school. Enough of the examples! [grin]
So in a sentence, I agree with Lisa’s posts and Markus’s later post about combinations of techniques (quote “Thinking in black and white when thinking of the schools of testing leads to misconception”), but Markus please ditch the Agile school rename attempt!
cheers,
Erik

Besides the remaining very good comments on this topic on the list, the idea of Schools in Testing simply does not matter to Agile testers. Here is an excerpt from the discussion I had with James Bach on the topic:

When we speak of schools in my community, we are speaking of paradigms in the Kuhnian sense of worldviews that cannot be reconciled with each other.

Basically, context-driven thinking helps Agile testers as well, but they don’t adopt a Kuhnian sense of worldviews towards testing. Mainly I am still considering whether there is such a thing as an Agile School or not. Bret Pettichord felt there was, but currently I am not convinced of it. I am glad that I learned a big lesson from great contributors on my renaming approach, and I am finally ditching the attempt.