My Renaming attempt

After a discussion on the Agile-Testing mailing list, I decided to give up the proposal to rename the Agile School of Testing. Erik Petersen put it so well, and I fully agree with him, so I decided to quote him here:

The schools as defined by Bret in soundbites are:

Analytic School
sees testing as rigorous and technical with many proponents in academia

Standard School
sees testing as a way to measure progress with emphasis on cost
and repeatable standards

Quality School
emphasizes process, policing developers and acting as the gatekeeper

Context-Driven School
emphasizes people, seeking bugs that stakeholders care about

Agile School
uses testing to prove that development is complete; emphasizes automated
testing

I see no evidence in those descriptions that the Agile school has a monopoly on examples. All of these schools choose examples to demonstrate that a system appears to deliver their interpretation of functionality at a point in time, with differing degrees of attention to context and risk. I believe the Schools idea was originally intended to describe groups who tended to favor their ideas (dogma?) over others and focused mainly on functional testing, and when the original Agile School was named, it claimed to be replacing all the other schools. This has since changed considerably, and with new techniques such as Mocking and Dependency Injection and a focus on refactoring (CRAP metric anyone?) I would argue that Agile is much more about design and development aimed at simplicity (YAGNI), of which automated testing is only a part, rather than a specific School of (functional) testing. As I have said before, schools tend to manifest themselves in organizational culture and IMHO are relevant for discussion purposes only. Testing can involve many ideas, some of which are typically associated with schools, and depending on context and risks, testing can draw from all of them. My 2 cents.
Part of the problem is what the earliest schools have become, not what they started as. The original articles on waterfall in the early 1970s stated that just going from dev to test in one step never worked and needed to iterate between the two, but that got lost. In the mid 70s, Glenford Myers in amongst all his “axioms of testing” said that tests need to be written before being executed (because computers were million dollar machines and time was money, no longer such an issue) but he also said stop and replan after some execution to focus on bug clusters, and that also got lost. We need to be open to new ideas and weigh them against our current ones, based on their value and not the perceived baggage they bring from a particular school. Enough of the examples! [grin]
So in a sentence, I agree with Lisa’s posts and Markus’s later post about combinations of techniques (quote “Thinking in black and white when thinking of the schools of testing leads to misconception”), but Markus please ditch the Agile school rename attempt!
cheers,
Erik

Besides the other very good comments on this topic on the list, the idea of schools in testing simply does not concern Agile testers. Here is an excerpt from the discussion I had with James Bach on the topic:

When we speak of schools in my community, we are speaking of paradigms
in the Kuhnian sense of worldviews that cannot be reconciled with each
other.

Basically, context-driven thinking helps Agile testers as well, but they don’t adopt a Kuhnian sense of worldviews towards testing. Mainly I was considering whether there is such a thing as an Agile School or not. Bret Pettichord felt there was, but currently I am not convinced. I am glad that I learned a big lesson from great contributors on my renaming approach, and I am finally ditching this attempt.

TDDing, Pairing and overcoming common prejudices

Just read a wise posting from Mike Hill on pairing and TDDing. Here are two points I want to cite; I hope they will catch your attention for more. If so, read the full article – it’s worth your time.

You can not go faster by shorting internal quality unless you are a noob.

Taking a step back and improving the internal quality is the best decision you can make – and you can make this decision tomorrow, today, now.

The hard part of programming is the thinking…, and pairing and TDDing both improve it.

This is true. In February we took on a project that had been struggling for about half a year. With one and a half testers we were able to automate the open-ended tests in just two weeks. At a previous customer one year earlier, we had struggled with the same task for nearly a year. The difference came from the kick-start in thinking before starting test automation – which I consider software development in my context.

Apprenticeship

Enrique Comba Riepenhausen shared a video from Dave Hoover on his apprenticeship years in the software industry and on becoming a journeyman. While watching the video I came to understand that reading books and blogs and participating in online mailing lists is the track Dave Hoover started off on as well. This made me confident that I am on the right track.

My next step as an apprentice will be the Agile Testing Days conference in Berlin in October. I submitted a proposal about my past year’s experiences with Agile practices and principles in a more traditional environment. My proposal got accepted and I’m curious about the session. Though my session is scheduled near the end of the conference, it will be worthwhile to join. Additionally I hope to finally meet the people I so far know only from e-mail and chat.

Example-driven School of Testing – Updates II

Yesterday I decided to bring some light into the darkness of my renaming proposal for the Agile school of testing. Therefore I raised my points on the Agile Testing mailing list. There was some valuable feedback, and I identified the next points where clarification is necessary. As a side effect, I started doubting that the term “school” does the discussion any good.

First things first. One reply made me aware of a question left open on Bret Pettichord’s slides: Do I have to pick a school? A similar discussion arose while compiling the Software Craftsman’s Ethic. At one point we agreed that the ethics we had written so far should be viewed as one school of Software Craftsmanship. While reflecting on the term “school”, we tried to replace it with “guild” and the like. Answering Bret’s question of whether or not to pick a school: I refuse to do so. The Craftsmanship Manifesto teaches me one thing: the more practices you know, learn and maintain, the better prepared you are for the job at hand. This is the essence of the second statement of the manifesto and the “learn” portion of the ethics:

We consider it our responsibility 
  to hone our craft in pursuit of mastery;
    therefore, we 
      continuously explore new technologies and
      read and study the work of other craftsmen.

This means that I do not have to pick one school. By knowing how to combine valuable lessons from each of the schools, I have a more powerful set of tools at hand. While replying to James Bach on this I realised that the combination of the Context-driven school and the Example-driven school is quite valuable. Gerald M. Weinberg wrote about this several decades ago:

…putting together two ideas to form a new one that’s better than either of its parents…

Copulation is a key to innovation; its opposite, which gets in the way of creative new ideas, is the No-Problem Syndrome. Read on in Becoming a Technical Leader.

My closing note: thinking in schools tends toward thinking in pigeonholes. Don’t stop thinking while using the schools approach as a tool for reducing complexity to a model your brain can grasp. You shouldn’t forget that humans are more complex than your mind can handle, though.

Example-driven School of Testing – Updates

After having a discussion on the Example-driven School of Testing with Michael Bolton, I realised that I had missed some points. Being human, I truly believe that I am allowed to miss a point once in a while; the critical thing is to realise it yourself.

The first point Michael mentioned was the idea that whenever the acceptance tests pass, the project team is done. Indeed, acceptance tests tell the project team that they’re not done when they don’t pass – which is something different; check the Wason selection task for an explanation. (Just now I realise that I had given false kudos to James Bach for this quote. Sorry, Michael.) My reply was to view acceptance tests as a goal in terms of S.M.A.R.T.: Specific, Measurable, Attainable, Relevant and Time-bound. You can measure when you might have something to deliver, i.e. when your agreed examples from the specification workshops pass. You can measure when you might be ready to bring the software to the customer, and the goal is specific and should be – of course – business-relevant. A friend of Michael Bolton put it this way:

When the acceptance tests pass, then you’re ready to give it to a real tester to kick the snot out of it.

This led my thoughts to a post of mine from February this year: Testing Symbiosis. The main motivation behind that post was a discussion on the Software Testing mailing list about Agile and context-driven testing. The plot outline of the story was Cem Kaner’s reply on that list to an article by Lisa Crispin and Janet Gregory, which led to Janet leaving the list. Enough history.

The outline of Testing Symbiosis is that exploratory testing and test automation complement and rely on each other. Purely automated testing can leave problems in your user interface or in the user experience of your product undetected, while purely exploratory testing may let serious regression bugs slip through. The wise combination of the two can lead to a well-tested software product whose quality level is perceived as high when shipped to your customer. Here is Michael Bolton’s summary after reading Testing Symbiosis:

That said, I think that the role of the tester as an automator of high-level regression/acceptance tests has been oversold. I’m hearing more and more people agree with that. I think your approach is most sane: “On the other hand you can more easily automate software testing, if the software is built for automation support. This means low coupling of classes, high cohesion, easy to realize dependency injections and an entry point behind the GUI. Of course by then you will have to test the GUI manually, but you won’t need to exercise every test through the slow GUI with the complete database as backend. There are ways to speed yourself up.”

1) Automation support.
2) Entry points behind the GUI.
3) Human interaction with the GUI, more automation below it.
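The idea of an entry point behind the GUI can be sketched in a few lines. This is a minimal, hypothetical illustration – the names (`OrderService`, `InMemoryPriceRepo`) are my own, not from the post – of how dependency injection lets automated tests exercise the business logic directly, leaving only the GUI itself for manual checks:

```python
# Hypothetical sketch: dependency injection enables testing below the GUI.
# All class names here are invented for illustration.

class InMemoryPriceRepo:
    """Test double replacing the real (slow) database backend."""
    def __init__(self, prices):
        self._prices = prices

    def price_of(self, item):
        return self._prices[item]


class OrderService:
    """Business logic with its dependency injected (low coupling)."""
    def __init__(self, price_repo):
        self._price_repo = price_repo

    def total(self, items):
        return sum(self._price_repo.price_of(i) for i in items)


# An automated test exercises the service directly, bypassing the GUI
# and the complete database backend:
repo = InMemoryPriceRepo({"apple": 2, "pear": 3})
service = OrderService(repo)
assert service.total(["apple", "pear", "apple"]) == 7
```

In production code the same `OrderService` would receive the real repository; the GUI layer on top stays thin and is checked by a human.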

These two points seemed to need some clarification. Feel free to remind me of any corners I have still left out.

Example-driven School of Testing

Some years ago Bret Pettichord defined four schools of software testing: the Analytic School, the Standard School, the Quality School and the Context-Driven School. These ideas were incorporated into the book Lessons Learned in Software Testing: A Context-Driven Approach by James Bach, Cem Kaner and Bret Pettichord. Later on, Bret added the Agile School of testing. Some days ago I realised that the name for the Agile School of testing is rather poor. That is my hypothesis, and based on the insights of the last few years I would like to propose a new name for the thing Bret called the Agile School of testing. Bret’s initial characterisation was based on Agile software development being mostly about test-driven development with massive use of automated regression tests while the code is written. That’s why he included the following core beliefs of the Agile School of Testing:

  • Software is an ongoing conversation
  • Testing tells us that a development story is complete
  • Tests must be automated
  • Key Question: Is the story done?

Some time later Brian Marick wrote down what was already in the heads of many people: tests in the Agile world are based on examples. Additionally, Brian raised the point of renaming test-driven development to example-driven development, since this reflects testing in the agile context more appropriately.

A bunch of techniques ending in ‘dd’ for ‘-driven development’ appeared – mainly inspired by the Agile School, which started with test-driven development. Among these are Acceptance Test-driven Development, Behaviour-driven Development and Domain-driven Design (yes, that one does not end in ‘development’).

Back in February I was introduced to Specification by Example by Gojko Adzic, who transferred the idea of Agile acceptance testing based on examples to a process of specification. As pointed out in one of his presentations on FIT and Agile acceptance testing, examples elaborate requirements or specifications. On the other hand, examples can also become tests – and this is basically what the Agile School of Testing teaches us: testing is based on noting down examples in order to check that you’re done with developing the feature in your current iteration.
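The step from agreed examples to executable tests can be sketched in the spirit of FIT’s column fixtures. This is a hypothetical illustration – the discount rule and the table values are invented, not taken from any real specification workshop:

```python
# Hypothetical sketch: examples from a specification workshop turned
# into executable checks, FIT-style. The rule and values are invented.

def discount(order_value):
    """Domain rule under test: 10% discount for orders of 100 or more."""
    return order_value * 0.10 if order_value >= 100 else 0.0

# The agreed examples, written down as (order value, expected discount) –
# this is the "table" the team agreed on in the workshop:
examples = [
    (50, 0.0),
    (99, 0.0),
    (100, 10.0),
    (250, 25.0),
]

# Executing the examples checks whether the feature behaves as agreed:
for order_value, expected in examples:
    actual = discount(order_value)
    assert actual == expected, f"{order_value}: expected {expected}, got {actual}"
```

A FIT fixture does essentially the same: it reads the example table and reports which rows pass, so the examples double as both specification and regression test.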

Based on this I propose to rename the Agile School of Software Testing to the Example-driven School of Software Testing. This would also make clear that it’s not just about Agile, but rather about examples, as Brian Marick initially pointed out. Another benefit of the term would be the distinction from the Context-Driven School. I raise the point that we must truly understand that there is no either/or between Example-driven and Context-Driven: the two can be adopted together, one of them alone, or neither. From my point of view these two schools are able to co-exist and complement each other when applied together.

Practicing relevance

At the Software Craftsmanship conference in London at the end of February, Micah Martin presented a session in which he showed an implementation of Langton’s Ant in Ruby. As an introduction he raised the point of how important practice is for a software craftsman. He used his background in martial arts as a motivator, where katas are used to get into uncomfortable positions in order to gain flexibility for the fight with an opponent. By practicing these fight-unrelated moves over and over, the aspirant learns for the fight. (I remember the first Karate Kid film, where something alike happens when the kid has to paint a fence, etc.)

At the time I did not fully agree with Micah’s point on practicing. However, I made a note in my personal digest to research something suitable for me. Since I don’t have a background in martial arts, but in competitive swimming and as a swimming trainer, I looked for analogies such as how I teach kids to swim with learning chains that move from the easy to the complex and from the well-known to the unknown. These ideas did not quite match Micah’s kata analogy, since they relate to how people learn in their minds, and the bridge to software development seemed too far away.

During the last week I found a more suitable analogy, partially motivated by my experiences in swimming. As a trainer I need to be able to save anyone from drowning. While this sounds reasonable, since parents entrust their kids to me for training, during the latest refresher courses on water rescue I realized that the German DLRG – the German lifeguard organization – practices over and over how to rescue someone from drowning. A drowning person might grab for the rescuer’s neck in panic just to save herself, so the rescuer might end up needing to save herself as well. There are grips to free oneself from such situations, and the DLRG practices them over and over in order to be able to use them naturally in a panic situation – since every second counts by then.

Similarly, during the first-aid courses I have had to take several times, I was told to practice the resuscitation process and how to get an injured person into the recovery position. When people are under stress they forget what they have learned if they did not practice it over and over. Likewise, when a software project gets under stress, people may forget to apply test-driven development, refactoring and design patterns, and introduce technical debt. Practicing seemingly unrelated software katas in order to improve your TDD skills, on the other hand, might enable you to excel at your work even in serious situations. As with first aid and water rescue, practicing, knowing and maintaining software development practices without having to use them every day makes me feel safer and more relaxed.
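A kata practiced test-first illustrates the point. This is a hedged sketch of a classic warm-up kata (FizzBuzz), not an example from the post: the assertions are written down first, like the rescue grips, so that the moves come naturally under pressure:

```python
# Sketch of a small kata practiced test-first: note down the expected
# behaviour as assertions first, then write just enough code to pass them.

def fizzbuzz(n):
    """Classic kata rule: multiples of 3 -> Fizz, of 5 -> Buzz, of both -> FizzBuzz."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The "tests written first" of the kata:
assert fizzbuzz(1) == "1"
assert fizzbuzz(3) == "Fizz"
assert fizzbuzz(5) == "Buzz"
assert fizzbuzz(15) == "FizzBuzz"
```

The value is not in the FizzBuzz solution itself but in rehearsing the red-green-refactor rhythm until it survives project stress.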

Apprenticeship

Obtiva and 8th Light are currently emphasizing Software Craftsmanship with an apprenticeship program between the two. Jim Suchy is writing down each day’s experiences on their blog. The entries for the first three days can be found here, here and here. While collecting the links for this entry, I also came across Colin’s Apprenticeship.

As it turns out, I have run into the idea twice during the last few weeks. One of the business analysts from our product development department stated some weeks ago that he has been asking to do some kind of apprenticeship in the System Integration department in order to get to know the product he is defining. While the technical view of the product is rather clear to him, he is unsure of the actual use cases during System Integration work, where we configure most of the system for the end customer.

Some weeks earlier I came across the idea of a tester working on-site with the end customer in Lisa Crispin’s and Janet Gregory’s book. From the viewpoint of a tester, I would be better able to pursue the mission of delivering working software to the customer if I had some experience of how the product is used in day-to-day work. By sitting in on call-center calls and watching our product being used to sell products to the end consumers of our customers, I would get a very good picture of the business flow and the motivation behind it – exactly what Gojko Adzic addresses in Bridging the Communication Gap.

Today I sat down with a former group lead to identify what I shall present on FIT at the end of next week. While we were discussing, I mentioned that two of my colleagues will be on parental leave for four weeks during the summer, since they are becoming fathers. Discussing the bottleneck we will face by then, he suggested exchanging one of his testers with my group in order to build deeper knowledge of how we use FIT as a testing framework. Personally I think of it as an apprenticeship program that could lead to improved communication between our two departments. Hopefully something great will be built out of this idea – I’ll try to motivate and follow up on it.

Mapping practices for Personnel development

Influenced by Adewale Oshineye’s session on Mapping Personal Practices, I led a meeting on personnel development with one of my testers two weeks ago. Earlier this year I had started some work on personnel development for my staff, and we had already come up with six major categories worth considering as evolution points: Testing, Programming, Product Knowledge, Teamwork, Technical Leadership and Leadership with personal responsibility. Influenced by that previous session, we came up with a similarly structured mind map.


The colleague we drew the map for is rather junior. In the map you can see a division into the major categories mentioned. Let me try to explain how we arrived at the final map.

Housekeeping

First of all we built up the map with my colleague’s personal practices. Motivated by the flipchart page from our first session in January and Adewale’s session on mapping personal practices, we started off with the items my colleague already does to a reasonable degree and in which he has already reached a good level. Our initial session was based on the Shu-Ha-Ri principle of skill development, and we picked those practices where we agreed he had already reached Ha level. For the Testing category this includes writing tests, bug tracking and bug verification, and executing regression tests on a regular basis. Related to our company’s products, we identified knowledge of one of the major sub-products and of one component related to it that was built over the last year and that he had tested heavily during this period. Related to Teamwork, we agreed that he has a great skill for supporting our developers and enabling team communication through direct contact and collaboration. I consider him to have huge strengths in these fields and respect him for it. He completed a computer science degree with a minor in psychology, which of course helps here. Since he is rather junior, there seemed to be no relevant practices for the Programming and Technical Leadership categories, so we left them empty.

Development

We then started to discuss areas of improvement, and reconsidering each category, we identified one point per category. For the Testing portion we agreed on building more knowledge of user acceptance tests. Currently our teams have big problems with this item, and in the last few Lessons Learned workshops this year we realized that our company as a whole needs to improve in this field. Given his communication skills, this is a good opportunity to combine two areas. Related to Programming, we agreed on the possibility of going out and asking for help when running into problems while doing fixture programming for our test automation; here again I was happy to bring his communication skills into the area of improvement. Related to product knowledge, we chose to gain more insight into the second major component our sub-product has to deal with. This will not only give him a better overview of the value we deliver to our customer, but also enable him to find hidden assumptions in the plug-ins we test for Sub-Product 1 that indirectly talk to Sub-Product 2. (I know, it’s hard to explain without mentioning more context.) In the Teamwork area we agreed on two major points for improvement in the whole organization, and I think he can help us a lot with these. We called the first one Synchronisation: getting developers and testers to a common understanding of the solution, including testers from the very beginning of the project, and including them in discussions about the solution. The second is distributed collaboration: we have co-workers spread all over the planet – Germany, Poland, Romania, Spain, Italy, Brazil, Turkey, Ukraine, Malaysia – and a usual project consists of people working in at least two countries. With his very good communication skills he can help us improve here.
For technical leadership we both agreed that the technical knowledge needs to improve before starting a discussion here.

Conclusion

After finishing the mind map, I was very proud of the result. Over the next few months I will track the achievements in these fields, so we have a discussion basis for the review we agreed to hold in early June. There is no impact on salary or the like based on the outcome; I just realized during the last year that I need to take care of my peers’ personnel development, since no one else did. Using mind maps improved the discussion a lot, I think, and I was able to incorporate his best skill – communication – into the remaining areas to help the organization as a whole. One side note I would like to add: most of the practices and feedback came from my colleague. I tried to facilitate and help him find ways to improve himself without demanding them. All of the discussion was done in collaboration, with the biggest contribution coming from him. I also asked him afterwards whether I would be allowed to publish this online after anonymizing it. (I had to translate the German terms to English and remove the company and product names.)

Software Testing, Craft, Leadership and beyond