Category Archives: Agile Testing

Testing inside agile development cycles

TDDing, Pairing and overcoming common prejudices

Just read a wise post from Mike Hill on pairing and TDDing. Here are two points I want to cite; I hope they raise your curiosity for more. If so, read the full article; it’s worth your time.

You can not go faster by shorting internal quality unless you are a noob.

Taking a step back and improving the internal quality is the best decision you can make – and you can make this decision tomorrow, today, now.

The hard part of programming is the thinking…, and pairing and TDDing both improve it.

This is true. In February we had a project that had been struggling for about half a year. With one and a half testers we were able to automate the open-ended tests in just two weeks. At a previous customer a year earlier we had struggled with the same task for nearly a year. The difference came from the kick-start in thinking before starting test automation – which I consider software development in my context.

Apprenticeship

Enrique Comba Riepenhausen shared a video of Dave Hoover on his apprenticeship years in the software industry and on becoming a journeyman. While watching the video I came to understand that reading books and blogs and participating in online mailing lists is the track Dave Hoover started off on as well. This made me confident that I am on the right track.

My next step as an apprentice will be the Agile Testing Days conference in Berlin in October. I submitted a proposal based on my past year’s experiences with Agile practices and principles in a more traditional environment. My proposal was accepted and I’m curious about the session. Though my session is scheduled near the end of the conference, it will be worthwhile to attend. Additionally, I hope to meet the people I so far know only from e-mail and chat.

Example-driven School of Testing – Updates II

Yesterday I decided to shed more light on my proposal to rename the Agile Testing school, so I raised my points on the Agile Testing mailing list. There was some valuable feedback, and I identified the next points where clarification is necessary. As a side effect I started doubting that the term “school” does the discussion any good.

First things first. One reply made me aware of a question left open on Bret Pettichord’s slides: do I have to pick a school? A similar discussion arose while compiling the Software Craftsman’s Ethic. At one point we agreed that the ethics we had written together so far should be viewed as one school of Software Craftsmanship. While reflecting on the term “school”, we got to the point where we tried to replace it with “guild” and the like. Answering Bret’s question of whether or not to pick a school: I refuse to do so. The Craftsmanship Manifesto teaches me one thing: the more practices you know, learn and maintain, the better prepared you are for the job at hand. This is the essence of the second statement of the manifesto and the “learn” portion of the ethics:

We consider it our responsibility 
  to hone our craft in pursuit of mastery;
    therefore, we 
      continuously explore new technologies and
      read and study the work of other craftsmen.

This means that I do not have to pick one school. By knowing how to combine valuable lessons from each of the schools, I have a more powerful set of tools at hand. While replying to James Bach on this I realised that the combination of the Context-driven School and the Example-driven School is quite valuable. Gerald M. Weinberg wrote about this several decades ago:

…putting together two ideas to form a new one that’s better than either of its parents…

Copulation is a key to innovation; its opposite, which gets in the way of creative new ideas, is the No-Problem Syndrome. Read more in Becoming a Technical Leader.

My closing note: thinking in schools tends to lead to thinking in pigeonholes. Use the schools approach as a tool for reducing complexity, in order to get a model your brain can grasp, but refuse to stop thinking. You shouldn’t forget that humans are more complex than your mind can handle, though.

Example-driven School of Testing – Updates

After a discussion with Michael Bolton on the Example-driven School of Testing, I realised that I had missed some points. Being human, I truly believe I am allowed to miss a point once in a while; the critical part is to realise it yourself.

The first point Michael mentioned was the idea that whenever the acceptance tests pass, we – the project team – are done. Indeed, acceptance tests tell the project team that they’re not done when they don’t pass. This is something different – check the Wason selection task for an explanation. (Just now I realise that I had given false kudos to James Bach for this quote. Sorry, Michael.) My reply was to view acceptance tests as a goal in the sense of S.M.A.R.T.: Specific, Measurable, Attainable, Relevant and Time-bound. You can measure when you might have something to deliver, i.e. when the examples agreed upon in your specification workshops pass. You can measure when you might be ready to bring the software to the customer; the goal is specific and should – of course – be business-relevant. A friend of Michael Bolton put it this way:

When the acceptance tests pass, then you’re ready to give it to a real tester to kick the snot out of it.

This led my thoughts to a post of mine from February this year: Testing Symbiosis. The main motivation behind that post was a discussion on the Software Testing mailing list about Agile and Context-driven testing. The plot outline of the story was Cem Kaner’s reply on that list to an article by Lisa Crispin and Janet Gregory, which led to Janet leaving the list. Enough history.

The outline of Testing Symbiosis is that Exploratory Testing and Test Automation complement each other and rely on each other. Automated testing alone can leave problems in your user interface or in the user experience of your product, while purely exploratory testing may let serious regression bugs slip through. The wise combination of the two can lead to a well-tested software product whose quality is perceived as high when shipped to your customer. Here is Michael Bolton’s summary after reading Testing Symbiosis:

That said, I think that the role of the tester as an automator of high-level regression/acceptance tests has been oversold. I’m hearing more and more people agree with that. I think your approach is most sane: “On the other hand you can more easily automate software testing, if the software is built for automation support. This means low coupling of classes, high cohesion, easy to realize dependency injections and an entry point behind the GUI. Of course by then you will have to test the GUI manually, but you won’t need to exercise every test through the slow GUI with the complete database as backend. There are ways to speed yourself up.”

1) Automation support.
2) Entry points behind the GUI.
3) Human interaction with the GUI, more automation below it.

These two points seemed to need some clarification. Feel free to remind me of any corners I have still left out.
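To make the third point of that summary more concrete, here is a minimal sketch in Java, with entirely hypothetical class names not taken from any real project, of what an entry point behind the GUI with an injected dependency could look like, so that most checks run below the GUI against a fake backend instead of the complete database:

    // Hypothetical sketch only: an "entry point behind the GUI" with an
    // injected dependency, so the check runs without the GUI or a database.
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Port to the persistence layer; the real implementation talks to the database.
    interface AccountRepository {
        double balanceOf(String accountId);
    }

    // Business logic reachable below the GUI: low coupling, dependency injected
    // through the constructor.
    class TransferService {
        private final AccountRepository accounts;

        TransferService(AccountRepository accounts) {
            this.accounts = accounts;
        }

        boolean canTransfer(String accountId, double amount) {
            return accounts.balanceOf(accountId) >= amount;
        }
    }

    public class TransferServiceTest {
        @Test
        public void sufficientBalanceAllowsTransfer() {
            // A fake repository replaces the complete database as backend.
            AccountRepository fake = accountId -> 150.0;
            TransferService service = new TransferService(fake);
            assertTrue(service.canTransfer("4711", 100.0));
        }
    }

The GUI itself would still be tested manually on top of this, as the quote above describes.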

Example-driven School of Testing

Some years ago Bret Pettichord defined four schools of software testing: the Analytic School, the Standard School, the Quality School and the Context-Driven School. These ideas were incorporated into the book Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord. Later on Bret added the Agile School of testing. Some days ago I realised that the name of the Agile School of testing is rather poor. That is my hypothesis, and I would like to propose a new name, based on the insights of the last few years, for the thing Bret called the Agile School of testing. Bret’s initial view was that Agile software development is mostly about test-driven development with massive use of automated regression tests while the code is written. That’s why he included the following core beliefs of the Agile School of Testing:

  • Software is an ongoing conversation
  • Testing tells us that a development story is complete
  • Tests must be automated
  • Key Question: Is the story done?

Some time later Brian Marick wrote down what was already in the heads of many people: tests in the Agile world are based on examples. Additionally, Brian raised the point that test-driven development should be renamed example-driven development, since this reflects testing in the agile context more appropriately.

A bunch of techniques ending in “DD” for “-driven development” appeared – mainly inspired by the Agile School, which started with test-driven development. Among these are acceptance-test-driven development, behaviour-driven development and Domain-Driven Design (yeah, right, this one does not end in “development”).

Back in February I was introduced to Specification by Example by Gojko Adzic, who transferred the idea of Agile Acceptance Testing based on examples into a process of specification. As pointed out in one of his presentations on FIT and Agile Acceptance Testing, examples elaborate requirements and specifications. On the other hand, examples can also become tests – and this is basically what the Agile School of Testing teaches us: testing is based on writing down examples in order to check that you’re done with developing the feature in your current iteration.
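As a small illustration of how an example from a specification workshop can turn into an executable test, here is a sketch of a FIT column fixture in Java. The fixture name and the discount rule are invented for this post; only the fit.ColumnFixture base class comes from the FIT library.

    // Backs a wiki table along the lines of:
    //   | DiscountFixture |
    //   | orderTotal | discount() |
    //   | 100.00     | 5.00       |
    //   | 50.00      | 0.00       |
    import fit.ColumnFixture;

    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;      // input column, filled from the table

        public double discount() {     // calculated column, checked against the table
            return orderTotal >= 100.0 ? orderTotal * 0.05 : 0.0;
        }
    }

Each row of the table is one example; when the calculated column matches the expected value, the example passes as a test.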

Based on this I propose to rename the Agile School of Software Testing to the Example-driven School of Software Testing. This would also make clear that it’s not just about Agile, but rather about examples, as Brian Marick initially pointed out. Another benefit of this term would be the distinction from the Context-Driven School. I would stress that we must truly understand that it is not a matter of either Example-driven or Context-Driven; the two can be adopted together, one of them alone, or neither of them. From my point of view these two schools are able to co-exist and complement each other when applied together.

Testing focus of Software Craftsmanship – Values

Some weeks ago I was first made aware of Software Craftsmanship. Around the same time I had come across Bob Martin’s fifth Agile value and Alistair Cockburn’s New Software Engineering. Just today I stumbled over a topic opened by Joshua Kerievsky on the XP mailing list, The Whole Enchilada, only to find another blog entry from Dave Rooney on the underlying thoughts. It seems to me there is something coming, something in the air, and I would like to share my current picture of the whole from a testing perspective. Even this week there was a discussion on the Agile Testing group, started by Lisa Crispin’s blog entry on The Whole Team. I decided to organise these thoughts into a series of postings over the next few weeks. This time I would like to start with values from Agile methodologies.

First of all, I haven’t read every book on every topic around Agile Testing and Software Craftsmanship so far. There are books on technical questions on my bookshelf as well as managerial books that I would like to get into. Additionally, I have not yet had the opportunity to see an Agile team in action – though my team did a really good job moving the whole test suite from a shell-script-based approach to a business-facing test automation tool during the last year. My company came up with a new project structure during that time, and I only lately noticed that – similar to Scrum – there is a lack of technical factors in the new project structure. The new organisation seems to focus only on managerial aspects of software development, without advice on how to achieve technical success.

That said, I read a lot about Agile methodologies during the last year. The idea of Agile values and principles still fascinates me – so much so that I would like to compile a list of factors to pay attention to in day-to-day work. Since Agile methodologies use the scheme of practices, principles and values to describe the underlying concepts – during the last year I noticed a parallel to Shu-Ha-Ri – I would like to come up with a similar structure. Here are the values from eXtreme Programming and Scrum combined into a single list:

  • Communication

Human interaction relies on a large amount of communication. This item on the list is particularly related to the first value from the Agile Manifesto: individuals and interactions over processes and tools. Likewise, Alistair Cockburn introduced the concept of Information Radiators in order to combine communication and feedback on publicly visible whiteboards or flipchart pages. In the software development business, communicating the right way can avoid a lot of effort wasted on assumed functionality. Communication therefore also serves the Lean Software Development principle of eliminating waste.

  • Simplicity

The case for simplicity arose for the first time when my team was suffering from a legacy test approach that dealt with too many dependencies and a chained-test syndrome. Simply put: the tests violated the simplicity value. Changing one bit of one function forced changes to several tests on the other side of the test framework. Due to the high coupling caused by the absence of design rules and by no effort spent on paying down Technical Debt, adapting test cases to the business needs was not simple. The high complexity in the test code base was the starting point for this lack of simplicity. By incorporating design patterns, refactoring and test-driven development we were able to address it in our current approach. Additionally, one thing I learned from Tom DeMarco’s The Deadline is that when you build complex interfaces into the software system, you also make the interfaces between the humans involved equally complex. This directly results in a higher amount of communication needed to compensate – a fact that Frederick Brooks noted nearly forty years ago in The Mythical Man-Month.
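To show what escaping the chained-test syndrome means in practice, here is a hedged sketch, with invented class names rather than our actual code, of a test that builds its own fresh fixture and therefore no longer depends on state left behind by a previously executed test:

    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Tiny stand-in for the system under test.
    class CustomerDirectory {
        private final Map<String, String> names = new HashMap<String, String>();
        void add(String id, String name)    { names.put(id, name); }
        void rename(String id, String name) { names.put(id, name); }
        String nameOf(String id)            { return names.get(id); }
    }

    public class CustomerRenameTest {
        private CustomerDirectory directory;

        @Before
        public void freshFixture() {
            // Every test starts from a known state; no other test has to run first.
            directory = new CustomerDirectory();
            directory.add("42", "Alice");
        }

        @Test
        public void renamingKeepsTheCustomerId() {
            directory.rename("42", "Alicia");
            assertEquals("Alicia", directory.nameOf("42"));
        }
    }

Because each test owns its setup, changing one function only touches the tests that actually exercise it.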

  • Feedback

As described above for communication, information radiators that spread information, e.g. about the current status of the build in Continuous Integration or about remaining story points on a burndown chart, make feedback visible. There is even more to feedback. Continuous Integration is a practice that provides continuous feedback from repetitive tasks such as unit tests, code inspections and the like. The feedback gathered at the end of each iteration is another opportunity for continuous improvement of the whole team. When feedback is gathered quickly, the individuals involved can act on it accordingly.

  • Courage

Every functional team has to address problems and come up with proposed solutions. In order to state underlying problems you need the courage to bring up topics that might put the project behind schedule. On the other hand, staying quiet about these problems may directly result in Technical Debt, which Dave Smith has put this way:

TechnicalDebt is a measure of how untidy or out-of-date the development work area for a product is.
Definition of Technical Debt

There are other areas where each team member needs courage. Here is an incomplete list to give you an idea:

  • when organizational habits are counter-productive to team or project goals
  • when working in a dysfunctional team
  • when the code complexity rises above a manageable threshold

  • Respect

Respecting each team member for their particular achievements and technical skill set is, to my understanding, part of the definition of a functional team. In a respectful work environment, a tester is more likely to find the courage to raise flaws in code metrics or violated coding conventions. When raising issues with someone else’s work habits, a constructive way of solving problems, by sending out a Congruent Message, becomes more likely. When each team member respects the technical skills of every other team member, this holds even in problematic situations. In a respectful atmosphere, criticism is not delivered in the form of accusations and therefore leads directly to constructive and creative solutions rather than defensive behaviour from the opposing parties.

  • Commitment

The whole team commits to the customer to deliver valuable quality at the end of each iteration. Without the commitment of each team member to deliver value to the customer, the project’s success is put at stake. Leaving out unit tests on critical functions may blow up the application once it is in production. Likewise, left-out acceptance tests may become a time bomb that explodes during a customer presentation while doing some exploratory tests. The team also commits to its organisation to produce the best product it can, so that the organisation can make money with it. The flip side of the coin is distrust from organisational management. Committing to the vision of the project and the goal of the iteration is a basic value for every team.

  • Focus

Focus is an essential value behind testing. If you let yourself be distracted by follow-up testing, you may find yourself having exercised a bunch of test cases without following your initial mission to find critical bugs fast. Sure, it’s relevant to do some follow-up testing on bug fixes from the latest changes, but if you lose focus on your particular testing mission, you also fail to deliver the right value to your customer. Face-to-face communication and continuous improvement help you keep your focus on the right items, while a simple system supports your ability to focus on the harder-to-test cases of your software.

  • Openness

If you want to provide your customer with the best business value possible, it may turn out that you need to be open to new ideas. In particular, business demands change due to several factors: market conditions, cultural or organisational habits. As Kent Beck points out in eXtreme Programming eXplained:

The problem isn’t change, because change is going to happen; the problem, rather, is our inability to cope with change.

Without openness a tester is not able to cope with the change that is going to happen – be it driven by technical needs or simply by the business demanding the underlying change.

Innovations for the New Software Engineering

Last weekend I attended a seminar for IT professionals. On Saturday evening there was a panel discussion about which innovations to expect from the IT world over the next few years. While listening to the participants, among them leaders of some bigger German companies, I was struck. The discussion made me think about what the biggest thing that could happen might be. At some point during the exchange about cost-center vs. profit-center thinking it came to me that the most obvious innovation seemed to be ignored. I decided to share my insight here as well:

Involvement of customers and stakeholders in Software Engineering

How did I arrive at this obvious innovation? It is no solution to gather requirements for weeks and months, disappear into the black box of programming to build a system according to a plan that gets blown up anyway, and deliver partially tested software, just to find out that the 20 percent of value for the actual user of the product is not included. If I want a new suit that looks nice, I go to the dressmaker and participate and contribute to his work. Of course he could take my measurements, go into his black box of dressmaking and deliver the right suit for me. Really? No, some try-on iterations are worthwhile in order to get what I actually want.

I’m pretty much convinced: only if customers and stakeholders understand this lesson, which I learned from Agile Software Development, will it contribute to Software Engineering. It is not enough to think that there is something I have to specify and that I get something delivered afterwards. Just as I have to give the dressmaker my input on what I would like to have, such as the colour of the cloth he should use or its material, the software customer has to participate in the definition of the software system he would like to have built.

Likewise, if I go into a shop and ask for a suit, a good salesman will ask me questions in order to find out what style I prefer. Just as I participate in this exchange by giving the right answers, it is the same in the software world. Software engineers do not stare into a crystal ball to find out what advantage the system should give its users. (There is a family of methodologies named after the first of those two words, but that is not the intended meaning here.)

Likewise, it is natural that some iterations are necessary in order to get the right system. This is as true for dressmaking as for Software Engineering. One has to look over the built product in order to see whether it fits the needs. Due to the long duration of software projects, the business situation may have changed so that the product no longer fits. Maybe I was wrong in the first place when trying to guess what I would like to have – because I, as a customer, cannot look into my own crystal ball either.

Agile goes a step further. Agile asks the customer not only for participation but for commitment to the product that is being built. This is the case for iteration demonstrations of working software every other week. This is the case during the iterations while working through the product’s complicated business rules. In the dressmaker comparison there are no really complicated business rules behind the product he builds. This is where craft comes into play: the dressmaker has learned how to stitch pieces of cloth together into a suit for me. The Agile toolkit does this for Software Engineering, to some degree.

Last but not least, I would like to note that heavy customer and stakeholder involvement is not all of it. But at the current time it is the biggest first step to take in order to get real innovations out of Software Engineering. Just as pair programming and pair testing with the right input at the right time more than double the output, the same holds for real customer involvement.

Not every hammer fits all nails

I would like to refer to Gojko’s latest blog entry, which gave me something to think about. Here is the actual link I’m referencing: Using FitNesse pages as templates.

At my company we started back around Easter to get rid of our old tests, which

  • were built on a script generator that generated shell scripts
  • had a “shared fixture” strategy with high interface sensitivity to nearly all DOCs of the SUT
  • suffered from test chaining and slow tests

I’m glad I now know all those fancy words after reading through xUnit Test Patterns :-)

We started using FitNesse. Actually, I was the only one in our five-person team who had read the book. By coming up with mechanisms to use Shared Fixtures and to re-use an add-on component of our SUT for test account maintenance, we introduced a good framework which is easily extensible. Since we did not get the support from our developers that would have been necessary, we had to compensate by introducing unit tests for our generic test helpers, using refactoring, and so on.
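To give an idea of what I mean by unit tests for our generic test helpers, here is a hedged sketch with a made-up helper (the real one would talk to the SUT’s account-maintenance add-on, which is omitted here):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical helper that the FitNesse fixtures use to prepare test accounts.
    class TestAccountBuilder {
        private String owner = "default-owner";
        private double openingBalance = 0.0;

        TestAccountBuilder withOwner(String owner)     { this.owner = owner; return this; }
        TestAccountBuilder withBalance(double balance) { this.openingBalance = balance; return this; }

        String build() {
            // The real helper would call the SUT here; this sketch just renders the data.
            return owner + ":" + openingBalance;
        }
    }

    public class TestAccountBuilderTest {
        @Test
        public void buildsAccountWithGivenOwnerAndBalance() {
            String account = new TestAccountBuilder()
                    .withOwner("fitnesse-user")
                    .withBalance(250.0)
                    .build();
            assertEquals("fitnesse-user:250.0", account);
        }
    }

Covering the helpers this way gave us at least part of the safety net that unit tests from the developers would normally provide.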

In parallel to our approach, a colleague started to build a framework based on FitNesse as well. At the moment there are

  • complicated long tables
  • a high level of detail
  • test table chaining

built into this test suite.

Currently I’m facing the situation that we have been assigned to unify both approaches somehow. Since I’m pretty much convinced by the context-driven school of testing, I believe that both approaches have more or less good reasons to exist. After reading through Gojko’s blog entry I think that screwing up the system by not cooperating should not be a general solution. Since FitNesse offers several ways to screw things up, stay aware of the hammer argument:

If I’ve got a new hammer, everything looks like a nail.

Try realising instead that not every hammer fits every nail.

Introduction to Agile Acceptance Testing

Gojko Adzic has published a very good introduction on fitting Agile Acceptance Testing into the Agile development iteration. The flow reminded me of the impressions I got while reading some sample chapters of the upcoming book Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory. If you’re looking for a good introduction to agile testing, make sure to read Gojko’s article while waiting for the book.

Comments on “The Fifth Element of the Agile Manifesto” and Agile Mainstream

Since I’m personally not able to join the Agile 2008 conference, I was very pleased and thankful for Gojko Adzic’s regular blog entries on the sessions he attended. Today he had an entry on Robert C. Martin’s talk The Fifth Element of the Agile Manifesto, which made me think for a while.

Personally, I identified some causes of the effects Robert C. Martin addresses in his talk. Since I recently read Alistair Cockburn’s book Agile Software Development: The Cooperative Game, my views are heavily influenced by it.

Failure Modes

When looking through Cockburn’s human failure modes, I see two of them addressed in Robert C. Martin’s talk. The remaining three might also apply, but I leave that application to your own thoughts, or you might be willing to address them in the comments on this article.

Inventing rather than researching

Rather than reusing the practices from a given methodology, we prefer to reinvent the wheel by defining practices ourselves. The problem is that, without understanding the underlying Agile principles as stated in the Agile Manifesto, you end up ignoring them. Considering the Shu-Ha-Ri principle of skill acquisition: when you skip the Shu stage, you’re not going to reach the Ri stage.

Being inconsistent creatures of habit

First of all, Dr. Cockburn lists humans being inconsistent creatures of habit as one of the individual failure modes. Robert C. Martin addresses this with his question about the code coverage of the unit tests at the Agile conference. The lack of knowledge of the XP practices in the audience is another sign of this.

Cooperative Game

To quote Alistair Cockburn:

Software development is a (resource-limited) cooperative game of invention and communication. The primary goal of the game is to deliver useful, working software. The secondary goal, the residue of the game, is to set up for the next game. The next game may be to alter or replace the system or to create a neighboring system.

Simply put, if you do not set yourself up for the next game, you miss the secondary goal of the cooperative game principle. This might happen by not writing unit tests to the proper depth, or by relying completely on osmotic communication and thereby ignoring the need to extend the team – yes, this last one I address to myself.

Agile mainstream

Last but not least, there is an insight I came up with myself. The more people adopt agile processes, the more likely it is that sloppy software developers also get in touch with agile, and therefore the agile methods may get a bad reputation through people trying to skip stages of the agile skill set (Shu-Ha-Ri). Interestingly, I came up with this cause right before reading Gojko’s wrap-up of the conference, where exactly this is also part of his critique.

If you still haven’t got the clue, I propose reading the article How to Fail with Agile by Clinton Keith and Mike Cohn in Better Software Magazine. Personally I like the second one the most:

If agile isn’t a silver bullet, blame agile.