Yesterday during my keynote at the Agile Testing Days 2012, I said that I see a lot of standups where testers report on the previous day's work in the following way:
Yesterday I tested the thing with the stuff. I found some bugs, and filed them. Today I will test the foo with the bar.
I think this is horrible test reporting. While finishing the fifth beta of Elisabeth Hendrickson's upcoming book Explore It! I found a few more hints pointing in the same direction. Along the same lines, I will relate good test reporting during the standup to what Michael Bolton, for example, talks about when it comes to test reporting – we should tell three stories during test reporting:
On my way to EuroSTAR 2012 I started to think about the Cynefin model and the landscape diagrams which I know from giving some courses. I tried to relate them to software testing and different techniques, and I was not sure where this could lead me.
I had some exchanges with Michael Bolton, Bart Knaack and Huib Schoots on my early draft, and I wanted to share what I had ended up with. So, here it is.
This year I took all three courses on Black-box Software Testing. Each of them meant an investment of four weeks of my time, usually 2-4 hours per day. This was quite a blast, and I am happy that I made it through the courses.
One thing that struck me in the first part, the Foundations course, was the discussion of the different uses and misuses of code coverage. Here is a short description of things I have seen working, and things not working so well.
Two weeks ago the second GATE workshop took place in our offices in Munich. Unfortunately some of the participants couldn't make it, so there were just the three of us: Meike Mertsch, Alexander Simic, and myself. Although we were a bit low on energy in the morning, the day turned out to be a wholesome day of transpection – or, if you prefer, we did a lot of test chat. Here's what still sticks with me from the day.
It has been about three weeks now since the second Agile Lean Europe in Barcelona. Although I had the best intentions back then and promised to write a blog entry about my experiences there, I didn't do it until now. It seems that these days the best stuff I take away from conferences comes from the break-out conversations in the coffee breaks, not so much from the sessions themselves. This also holds for ALE 2012.
Last Saturday we had the first testautomation coderetreat in Munich. Woohoo! This was a kickstart for this new coderetreat format – alongside the original one from Corey Haines and the Legacy Coderetreat format from J.B. Rainsberger. Here is my report from the facilitator's point of view, with some hints about what I am going to try at follow-up coderetreats.
Last week I went to the 2nd SoCraTes conference, the German Software Craftsmanship and Testing conference. We did two days of open space discussions, and we all had great fun. One thing that caused me an extensive amount of trouble, though, was the number of sessions around BDD.
Some time ago, I wrote about the given/when/then-fallacy. But this time was different. Despite the emphasis that BDD puts on the ubiquitous language, I was struck by the fact that folks were pointing to different things while talking about BDD. It seems BDD suffers from the very problem it tries to prevent: the lack of a common understanding.
I don't know where exactly this comes from, and I also saw a couple of bad scenarios when it comes to the usage of tools like Cucumber or JBehave. I don't consider myself a BDD expert, and people have pointed out that I do something different around acceptance tests. Still, I thought I would share some of my thoughts about some of the examples that I ran across recently – and helped to improve. Here's my thought process on two of these scenarios.
Huib Schoots approached me late last year regarding a contribution for the TestNet Jubilee book he was putting together. The title of the book is “The Future of Software Testing”. I submitted something far too long, so most of it fell victim to the copy-editing process. However, I wanted to share my original thoughts as well. So, here they are – disclaimer: they might be outdated by now.
The Testing Quadrants continue to be a source of confusion. I heard that Brian Marick was the first to write them down after long conversations with Cem Kaner. Lisa Crispin and Janet Gregory refined them in their book on Agile Testing.
I wrote earlier about my experiences with the testing quadrants in training situations. One pattern that keeps recurring when I run this particular exercise is that teams new to agile software development – especially the testers – don't know which testing techniques to put in the quadrant with the business-facing tests that support the team. All there seems to be for them is critiquing the product. This kept confusing me, since I think we testers can bring great value there. Recently I had an insight motivated by Elisabeth Hendrickson's keynote at CAST 2012.
At times I find quite interesting things in topics that I don't seem particularly interested in. A recent example – again – comes from Let's Test in May 2012. While at the conference, I read through the program and thought that I didn't need to learn anything new about recent trends in bug reporting. Preferring to work on Agile projects, I don't think I will use a bug tracker much in the future.
On the other hand, I knew that I had signed up for BBST Bug Advocacy in June. So I kept wondering what I would learn there, and whether it would be as work intensive as Foundations was. I was amazed by some of the things. This blog entry deals with my biggest learning: building blocks for follow-up testing – something I think a good tester needs to learn regardless of their particular background.