Testing focus of Software Craftsmanship – Principles I

While the values might seem to be just a list of higher-level goals with no direct bearing on a particular project, Agile methodologies offer a lower level of definition: principles. From Elisabeth Hendrickson as well as Lisa Crispin & Janet Gregory I was able to compile a list of useful testing principles. Here is the first half of that list.

  • Provide Continuous Feedback

Feedback is one of the core values of successful projects. As Poppendieck & Poppendieck pointed out in Lean Software Development – An Agile Toolkit, feedback is vital to software development. They compare software development and manufacturing to cooking: while development or engineering means coming up with a recipe, manufacturing means applying an existing recipe over and over. Since software development is a recipe-building activity, it needs iterations and feedback to be successful.

From this perspective, testers feed the customer’s point of view back into the programming activities. Likewise, they provide feedback as a substitute for the end consumer by exercising tests – be they manual or automated. Like the memory hierarchy in current computer systems, testers can be thought of as the cache in front of production for each program increment. This does not mean that testers are the gatekeepers for every software delivery. Rather, it means they can provide timely feedback to the programmers instead of opening or closing the gate towards production. By addressing issues before the software goes into production, the programming team receives feedback efficiently enough to learn from it.

  • Deliver Value to the Customer

Testers are the bridge between the programming team with its technical terms and the customers with their business language. By incorporating a solid ubiquitous language into the test cases, these two worlds can be brought together, guaranteeing transparency for the customer. This higher-level goal can be achieved by working directly with the customer on test cases and likewise giving feedback to the team members who are developing the software the customer ordered. A test that is twisted up with complex logic is of no value to the customer. A test case that turns business terms green when it passes, on the other hand, is – as the sketch below illustrates. Testers provide the service of filling the communication gap between technical software development and the business of the end user.
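
To make this concrete, here is a minimal sketch of the difference in Java, assuming a hypothetical billing domain with invented names (Tariff, Invoice) rather than any real fixture code. The first test hides its intent behind technical plumbing; the second reads in the ubiquitous language of the business:

    import static org.junit.Assert.assertEquals;
    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    public class InvoiceTest {

        // Minimal domain stand-ins so the example compiles on its own.
        static class Tariff {
            final BigDecimal pricePerMinute;
            Tariff(BigDecimal pricePerMinute) { this.pricePerMinute = pricePerMinute; }
            static Tariff perMinute(String price) { return new Tariff(new BigDecimal(price)); }
        }

        static class Invoice {
            private final Tariff tariff;
            private final List<Integer> callSeconds = new ArrayList<Integer>();
            Invoice(Tariff tariff) { this.tariff = tariff; }
            void addCall(int seconds) { callSeconds.add(seconds); }
            BigDecimal total() {
                BigDecimal sum = BigDecimal.ZERO;
                for (int seconds : callSeconds) {
                    sum = sum.add(tariff.pricePerMinute
                            .multiply(new BigDecimal(seconds))
                            .divide(new BigDecimal(60)));
                }
                return sum;
            }
        }

        // Twisted up with technical plumbing: the customer cannot tell
        // which business rule is being verified here.
        @Test
        public void test17() {
            Invoice i = new Invoice(new Tariff(new BigDecimal("0.09")));
            for (int j = 1; j <= 3; j++) i.addCall(60 * j);
            assertEquals(new BigDecimal("0.54"), i.total());
        }

        // The same rule in ubiquitous language: business terms turn
        // green when the expected behaviour holds.
        @Test
        public void threeCallsOfOneTwoAndThreeMinutesCostFiftyFourCents() {
            Invoice invoice = new Invoice(Tariff.perMinute("0.09"));
            invoice.addCall(minutes(1));
            invoice.addCall(minutes(2));
            invoice.addCall(minutes(3));
            assertEquals(new BigDecimal("0.54"), invoice.total());
        }

        private static int minutes(int m) { return m * 60; }
    }

Hooked into a business-facing tool such as FIT, the second style is what lets the customer see their own terms turn green.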

  • Enable Face-to-Face Communication

Alistair Cockburn points out the principle that two people at a whiteboard communicate more efficiently than two people exchanging emails or videotapes, and he visualizes this principle in an easily understandable manner. When considering software testing, a tester should therefore motivate and enable direct forms of communication wherever possible. When two people talk face-to-face about a difficult requirement, they efficiently reach a common understanding of it. Therefore a tester should enable direct communication with customers, developers, department heads, project management, …

Lisa Crispin and Janet Gregory introduce the Law of Three in their book on Agile Testing. The Law of Three states that any design decision or requirement discussion needs to involve one person from each of the three groups: customers, developers, and testers. Whenever a discussion is started, a decision needs to involve all affected groups. Direct face-to-face communication supports this law by building efficient communication channels between the three.

  • Keep it Simple

Simplicity is key in Agile software development. Without the waste of heavyweight software processes, teams maintain their ability to stay agile. This directly means keeping your test cases as clear as possible. Simplicity additionally enables your team to understand your tests and to adapt them when the business demands it. Just like the Collective Code Ownership practice in eXtreme Programming, testing teams need collective test-case ownership. Only through simplicity do you enable not just your co-testers but also the business analysts, developers, and customers to adapt test cases when the situation asks for it.

  • Practice Continuous Improvement

This principle is directly demanded by the craft aspect of software development. Just as programmers pick up cool new programming languages and techniques, testers should keep their skills up to date. By visiting conferences on test techniques or the latest test automation tools, a tester can improve her skills in these areas. Learning a new programming language, or a test technique such as performance or load testing, is likewise a way to improve one’s own abilities. Learning a common language to express testing patterns is another example. By sharing this pattern language with your teammates, you enable them to learn from you, and you improve your presentation skills at the same time. These are just some parts of your daily work where you can practice improvement for yourself and your team. Look out for additional possibilities.

  • Reduce Test Documentation Overhead

As Mary and Tom Poppendieck pointed out in Lean Software Development – An Agile Toolkit, wasteful tasks should be avoided in order to achieve the highest possible return on investment. This means reducing test plans, test design documents, and test protocols to a minimum by using tool support and ubiquitous language. It does not mean dropping all test documentation. Rather, your team and your customer should be brought into the discussion about the necessary level of documentation. If you’re working on safety-critical software, or your organisation demands it for CMMI or ISO 9000 compliance, you will be forced to produce more extensive test documentation. Look out for ways to reduce it to a minimum, and avoid wasteful documentation that needs to be adapted to the current situation every week or even daily.

Testing symbiosis

Elisabeth Hendrickson raised a very good point on her blog about test automation: the time-value of information. She states that just as money today is worth more than money tomorrow, information known today is more valuable than the same information known tomorrow. Agile practices such as Fail Fast and iterative development enable teams to know more when making a decision. This of course holds for (A)TDD as well.

While reading through the comments on Elisabeth’s blog entry, I noticed James Bach’s thoughts on it and felt I had to respond immediately. One thing I have realised lately is that there seems to be an ongoing discussion between testers from the Agile school of testing and the context-driven school about whether or not to automate testing. Personally, while trying to answer Bret Pettichord’s question of whether I have to pick a school of testing, I feel that both schools are right to some degree. This does not mean they are wrong to some other degree, of course. The point is that this is not a well-structured problem – you’re faced with well-structured problems in elementary school. Software, and with it software testing, is an ill-structured problem of our adult world. There is no “test automation is always right” or “sapient testing is always right” out there.

Instead of thinking in just two categories, I encourage you to think of test automation and manual testing as a symbiosis. You can do a lot more manual testing if you have covered the tedious, repetitive tests with automation. On the other hand, you can automate software testing much more easily if the software is built to support automation. This means low coupling of classes, high cohesion, easily realized dependency injection, and an entry point behind the GUI. Of course you will still have to test the GUI manually, but you won’t need to exercise every test through the slow GUI with the complete database as a backend. There are ways to speed yourself up. If you’re lucky, they are open to you – this seems to be the case on most Agile projects. If you’re not, you have to find places where scripting or automation can speed up your tedious manual test cases. But having both in place together is the best choice you can make, from my point of view.
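
As a minimal sketch of such an entry point behind the GUI – with invented names (TariffRepository, RatingService), not taken from any real project – the business logic sits in a plain class whose database dependency is injected, so a test can drive it with an in-memory fake instead of the complete database backend:

    import static org.junit.Assert.assertEquals;
    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Test;

    // The dependency the production code satisfies with a database layer.
    interface TariffRepository {
        BigDecimal pricePerMinute(String tariffName);
    }

    // Entry point behind the GUI: the GUI only delegates here, so tests
    // can exercise the business logic without touching any widgets.
    class RatingService {
        private final TariffRepository tariffs;

        RatingService(TariffRepository tariffs) { // dependency is injected
            this.tariffs = tariffs;
        }

        BigDecimal rate(String tariffName, int callMinutes) {
            return tariffs.pricePerMinute(tariffName)
                    .multiply(new BigDecimal(callMinutes));
        }
    }

    public class RatingServiceTest {

        // In-memory fake instead of the complete database backend.
        static class InMemoryTariffs implements TariffRepository {
            private final Map<String, BigDecimal> prices = new HashMap<String, BigDecimal>();
            void add(String name, String price) { prices.put(name, new BigDecimal(price)); }
            public BigDecimal pricePerMinute(String name) { return prices.get(name); }
        }

        @Test
        public void ratesACallAgainstTheInjectedTariff() {
            InMemoryTariffs tariffs = new InMemoryTariffs();
            tariffs.add("standard", "0.09");
            RatingService service = new RatingService(tariffs);
            assertEquals(new BigDecimal("0.45"), service.rate("standard", 5));
        }
    }

The slow GUI then merely delegates to the same service, leaving only a thin layer for manual checks.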

Black-Belt Testing

Matt Heusser gave me a black belt for my reply to his black-belt testing challenge, and we exchanged some thoughts via e-mail. If you would like to know what to do in order to earn one too, you should read his blog entries on the Black-Belt Testing Challenge. Maybe I will post my replies to his challenge a few weeks from now, but since the challenge is ongoing, I have decided to keep them private for the moment.

Matt’s profile was interesting to me:

I’m a software craftsman with an interest in testing, project management, development, how people learn and systems improvement.

I just realised that his statement almost fully describes the view I have of myself as well. The ideas we exchanged via e-mail were pretty interesting, and I’m now looking forward to meeting him in person.

Testing focus of Software Craftsmanship – Values

Some weeks ago I first became aware of Software Craftsmanship. Around the same time I had run across Bob Martin’s fifth Agile value and Alistair Cockburn’s New Software Engineering. Just today I stumbled over a topic Joshua Kerievsky opened on the XP mailing list – The Whole Enchilada – only to find another blog entry from Dave Rooney on the underlying thoughts. To me it seems something is coming, something is in the air, and I would like to share my current picture of the whole from a testing perspective. Even this week a discussion started on the Agile Testing group, triggered by Lisa Crispin’s blog entry on The Whole Team. I decided to organise these thoughts into a series of postings over the next few weeks, starting this time with values from Agile methodologies.

First of all, I haven’t read every book on every topic around Agile Testing and Software Craftsmanship so far. There are books on technical questions on my bookshelf as well as managerial books that I would like to get into. Additionally, I have not yet had the opportunity to see an Agile team in action – though my team did a really good job moving the whole test suite from a shell-script based approach to a business-facing test automation tool during the last year. My company came up with a new project structure during that time, and only lately did I notice that – similar to Scrum – there is a lack of technical factors in the new project structure. The new organisation seems to focus on purely managerial aspects of software development, without advice on how to achieve technical success.

That said, I started to read a lot on Agile methodologies during the last year. The idea of Agile values and principles still fascinates me – so much, in fact, that I would like to compile a list of factors to pay attention to in day-to-day work. Since Agile methodologies use the scheme of practices, principles, and values to describe the underlying concepts – during the last year I noticed a parallel to ShuHaRi – I would like to come up with a similar structure. Here are the values from eXtreme Programming and Scrum combined into a single list:

  • Communication

Human interaction builds on a large amount of communication. This item on the list relates particularly to the first value from the Agile Manifesto: Individuals and interactions over processes and tools. Likewise, Alistair Cockburn introduced the concept of Information Radiators to combine communication and feedback on publicly visible whiteboards or flipchart pages. In the software development business, the right way of communicating can avoid a lot of effort wasted on assumed functionality. Communication therefore also serves the Lean Software Development principle of eliminating waste.

  • Simplicity

The case for simplicity first arose when my team was suffering from a legacy test approach that dealt with too many dependencies and a chained-test syndrome. Simply spoken: the tests violated the simplicity value. Changing one bit in one function forced changes to several tests on the other side of the test framework. Due to the high coupling caused by the absence of design rules and of any effort to pay down Technical Debt, adapting test cases to business needs was not simple. The high complexity of the test code base was the root of this lack of simplicity. By incorporating design patterns, refactoring, and test-driven development, we were able to overcome it in our current approach. Additionally, one thing I learned from Tom DeMarco’s The Deadline is that when you build complex interfaces into your software system, you make the interfaces between the humans involved equally complex. This directly results in a higher amount of communication needed to compensate – a fact Frederick Brooks noted nearly forty years ago in The Mythical Man-Month.

  • Feedback

As described above under communication, information radiators that spread information – e.g. the current status of the build in Continuous Integration, or the remaining story points on a burndown chart – make feedback visible. But there is more to feedback. Continuous Integration is a practice that yields continuous feedback from repetitive tasks such as unit tests, code inspections, and the like. The feedback gathered at each iteration’s end is another point at which to exercise continuous improvement of the whole team. When feedback is gathered quickly, the individuals involved can act on it accordingly.

  • Courage

Each functional team has to address problems and come up with proposed solutions. Stating the underlying problems takes courage, because it means bringing up topics that might throw the project behind schedule. On the other hand, staying quiet about these problems may directly result in Technical Debt, and as Dave Smith has put it:

TechnicalDebt is a measure of how untidy or out-of-date the development work area for a product is.
Definition of Technical Debt

There are other topics where each team member needs courage. Here is an incomplete list to give you an idea:

  • when organizational habits are counter-productive to team or project goals
  • when working in a dysfunctional team
  • when the code complexity rises above a manageable threshold

  • Respect

Respecting each team member for their particular achievements and technical skill-set is, to my understanding, part of the definition of a functional team. In a respectful work environment, a tester is more likely to muster the courage to raise flaws in code metrics or violations of coding conventions. When issues with others’ work habits are raised, a constructive way of solving problems – by sending a Congruent Message – is more likely. When each team member respects the technical skills of every other team member, this holds even in problematic situations. In a respectful atmosphere, criticism is not voiced as accusation and therefore leads directly to constructive and creative solutions rather than defensive behaviour from the opposing parties.

  • Commitment

The whole team commits to the customer to deliver valuable quality at the end of each iteration. Without each team member’s commitment to delivering value to the customer, the project’s success is put at stake. Leaving out unit tests on critical functions may blow up the application once it is in production. Likewise, left-out acceptance tests may become a time bomb that explodes during a customer presentation with some exploratory tests. The team also commits to its organisation to produce the best product it can, so the organisation can make money with it. The flip side of that coin is distrust from organisational management. Committing to the vision of the project and the goal of the iteration is a basic value for every team.

  • Focus

Focus is an essential value behind testing. If you let yourself be distracted into follow-up testing too easily, you may find yourself having exercised a bunch of test cases without following your initial mission: to find critical bugs fast. Sure, it’s relevant to do some follow-up testing on bug fixes from the latest changes, but if you lose focus on your particular testing mission, you also fail to deliver the right value to your customer. Face-to-face communication and continuous improvement help you keep your focus on the right items, while a simple system supports your ability to focus on the harder-to-test cases of your software.

  • Openness

If you would like to provide your customer with the best business value possible, it may turn out that you need to be open to new ideas. Business demands change due to several factors: market conditions and cultural or organisational habits, in particular. As Kent Beck points out in eXtreme Programming eXplained:

The problem isn’t change, because change is going to happen; the problem, rather, is our inability to cope with change.

Without openness, a tester is not able to cope with the change that is going to happen – whether it is driven by technical needs or simply by the business demanding the underlying change.

Testing Libraries vs. Frameworks

Lisa Crispin reported last week on the difference between test libraries and frameworks. When reading her blog entry, I felt the urge to comment on it with the approach we use at work for our two-part testing functions. In the end I figured I had not quite understood where the difference between test libraries and frameworks might lie. Here is the comment I made on the blog entry:

Since I’m working in the specialized business of mobile phone rating and billing, we introduced our own framework for testing. During the last year we switched our legacy test cases, which were fragile and suffered from high maintenance costs, to a FIT-based approach. We came up with two parts for this.

We have built up a toolkit of reusable tools that are lightweight and will most probably be reused on the next project. These additionally need to be independent of particular customer terminology. From my point of view this portion is a test library for our product.

The other part is an incorporating one. We build up Fixtures that are customer-dependent, from project to project. We use the low-level toolkit components for this as far as possible and stick to subclassing or interface implementation where needed.

Historically they grew from the customer-specific fixture part towards reusable components in the toolkit. So we start off with working software and, once we have a working copy, peel out the details we are likely to reuse.

Together both form a testing framework. I don’t seem to fully get your point on the difference between a library and a framework. Our low-level function toolkit seems to be a test library; combined with customer terminology we get a framework, where we simply need to write down the test cases in some tables.

Today I took the time to look up the definitions of a software library and a software framework on Wikipedia. After reading the definitions and revisiting my own words in the comment on Lisa’s blog, I now claim that we came up with a test library (our ToolKit), and that combined with FitNesse, other third-party libraries – e.g. JDom, J2EE, etc. – and our customer-dependent fixture code (we called this our Fixtures), we have a testing framework for our software product. The sketch below shows how I picture the split. Any other thoughts on this?
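
To illustrate the split, here is a minimal sketch with invented names (DurationParser, RateCallFixture) rather than our actual ToolKit code. The toolkit class knows nothing about any customer, while the fixture subclasses FIT’s ColumnFixture, carries the customer’s terminology, and merely delegates to the toolkit; driven by FitNesse wiki tables, the combination becomes the framework:

    import fit.ColumnFixture;
    import java.math.BigDecimal;

    // Toolkit part: customer-neutral and reusable on the next project.
    class DurationParser {
        // Parses durations such as "02:30" (minutes:seconds) into seconds.
        static int toSeconds(String duration) {
            String[] parts = duration.split(":");
            return Integer.parseInt(parts[0]) * 60 + Integer.parseInt(parts[1]);
        }
    }

    // Fixture part: customer-dependent from project to project. It
    // carries the customer's terminology and delegates to the toolkit.
    public class RateCallFixture extends ColumnFixture {
        public String callDuration;   // table column: call duration
        public String pricePerMinute; // table column: price per minute

        public String charge() {      // calculated column: charge()
            int seconds = DurationParser.toSeconds(callDuration);
            return new BigDecimal(pricePerMinute)
                    .multiply(new BigDecimal(seconds))
                    .divide(new BigDecimal(60), 2, BigDecimal.ROUND_HALF_UP)
                    .toPlainString();
        }
    }

A wiki table with the columns call duration, price per minute, and charge() then exercises the rule directly in the customer’s terms.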

Beware of your Green Field Messes

This week I paired with a colleague from another team. We were reworking one of our approaches and had started implementing one of the class responsibilities anew. We wanted to improve the performance of the overall test suite and needed to look into several classes to find out where the seconds and milliseconds got lost. I already knew there was one class that violated the Single Responsibility Principle, and I had been itching to look into its code for several weeks. Basically I was considering two options: either rewrite the major classes around our tests anew in order to improve the performance, or pay down the technical debt that caused our performance problems in the existing code.

We decided to start with the first option and therefore spent all Monday on some Ping-Pong Programming for the new class we were building. When we tried it out, it worked from a functional point of view. But when we wanted to incorporate our new class into the old code, we realized that this would be a major change. So we started to take a look into the old code. Soon we realized that there were some simpler problems in the old classes, and that fixing them would be easier than the overall rewrite.

Some weeks ago a blog entry from Robert Martin had made me aware that I would need to clean up my mud myself. Still, I had to make this experience on my own to realize that his statement was absolutely right. Hopefully I will be smarter next time.

Software Testing Craftsmanship influence on personal development plans

Since I will be attending the Software Craftsmanship Conference in London on 26th February 2009, I have started to make up my mind about what I believe the field of Software Testing Craftswomen and -men to be. This entry is a follow-up to Skillset Development, which I wrote earlier. Since that article, I have started to think about the craftsmanship aspect of software testing. While I previously thought of skill development only in ShuHaRi terms, craftsmanship describes a combination of skills as apprentice, journeyman, and master. (Usually I compare this with Star Wars Jedi ranks – Padawan, Jedi Knight, and Jedi Master – but I don’t know whether George Lucas holds the copyright on these terms.)

This year I took some time with my two direct reports and discussed a personal development plan with each of them. The results seemed successful for both of them, so I decided to share the approach we took. Basically, I began by introducing ShuHaRi for individual skill development, also mentioning the interrelationships – there is Shu in the Ha stage, and both Shu and Ha in the Ri stage. For visualisation purposes I drew three circles: in the innermost I wrote Shu, the middle one I labelled Ha, and the outermost circle Ri.

Then I began to introduce the craftsmanship aspect of our business. On the opposite side of the whiteboard I wrote down the terms apprentice, journeyman, and master, and explained that we had met in order to get a common picture of where the individual colleague would place himself, and of what we could do to advance that classification in the next year, e.g. from journeyman to master on the scale. While the classification was not very sharp and remained fluid, it gave us a good rule of thumb for the process to follow.

After introducing my view on what targets we could set in a personal development plan for each of the two, we divided the work we do in our test group into categories, which we identified on an individual basis. Here are the main categories we came up with for both:

  • Technical skills
  • Leading skills
  • Collaboration skills

Please note that I consider these categories to be team-, culture-, and organisation-dependent. Under technical skills we listed test methodology, as discussed by the many books out there, and programming skills for test automation. During the past year we introduced the Framework for Integrated Tests (FIT) in our team. Since we could not count on much support from our development team, we had to write the fixture code ourselves, and I introduced many aspects of test-driven development, design patterns, and refactoring, combined with pair programming and continuous integration. Therefore programming skills were included in both sessions.

For the leading part I explained that I see two ways to apply leadership skills in our group: as technical lead within our project structure, or as leader of the group. Since the latter job is taken by me, the opportunities for technical lead remained. As I had put one of my colleagues into a major project as technical lead for the testing part, this topic was very interesting for him. My other colleague told me that he does not yet consider himself far enough along on the technical side to become a leader there. That seemed reasonable to me, so we did not spend much time on this topic.

The last main category covered the skills you need in order to collaborate with people outside our group. This includes, for example, giving presentations, working with people from the development team and from project management, and etiquette. I found this topic a bit difficult to handle during the sessions, but we made it through.

After discussing the sub-items we wanted to look at, we first identified where each of the two currently stood, based on the ShuHaRi categorization. Jointly we went through each major category, identified topics worth considering, and wrote down one to three things together with a good guess of where the individual saw himself. Having captured the current development status, we were ready to review the list and identify steps to work on in the current year. For each item we discussed whether improvement was needed and what we could do about it. On some of the topics we were able to identify two or three things we would like to try out. All this was written down on a flipchart page, which we took with us afterwards.

When we had finished each individual list, we agreed on a review date for it. The more junior colleague suggested looking back at it after three months; the other suggested a period of six months. After talking to each of them, we additionally agreed to hang the flipcharts in our office right at their desks as a reminder in our day-to-day work.

I would appreciate feedback on this approach in the comments. If you have thoughts on how I could improve this process, I would be glad if you left a note. Hopefully, by following this approach, I will be able to come up with a list of skills I consider relevant for a software tester in a few more months.

Impressions from a Black Belt Testing Challenge

Two weeks ago Matt Heusser asked for participants for a Black Belt Testing Challenge. Since I could not resist the challenge, I wrote him an e-mail saying I would like to take part. The challenge consisted of watching a video and defending my personal view on it. When I sent him my reply on the video – how I see the test automation strategy shown in that technical conference talk – we started to combine each other’s thoughts on it. In doing so, we both realized that there is even more to testing than one might gather from past experience. From my point of view, we were struck by the insight of the context-driven testing approach as lately made clear by Cem Kaner.

In the end I was happy to read this line from Matt:

You are the first person to successfully step up to the challenge!

Technical Debt applied to Testing

During the last two days one thing puzzled me. Yesterday I had our annual review meeting at work, and during that session one topic came up that I kept thinking about the whole evening. Since I had read about Technical Debt on Sean Lendis’ blog that morning, the resolution I came up with was that this was an occurrence of something I would like to call Technical Test Debt. Maybe someone has already given this a different name, maybe it’s just unprofessionalism, but I’ll call it Technical Test Debt for now. In order to introduce Technical Test Debt, I’ll start by revisiting Technical Debt. Here is a quotation from Ward Cunningham’s site, which holds the first definition:

  • Technical Debt includes those internal things that you choose not to do now, but which will impede future development if left undone. This includes deferred refactoring.
  • Technical Debt doesn’t include deferred functionality, except possibly in edge cases where delivered functionality is “good enough” for the customer, but doesn’t satisfy some standard (e.g., a UI element that isn’t fully compliant with some UI standard).

A key point I see about Technical Debt is that it occurs in the development work of Agile teams just as much as in non-Agile, waterfall, V-Modell, whatever teams. One thing, for sure, is difficult: measuring Technical Debt. James Shore came up with a definition last year, but it has not fully satisfied me so far. To quote Martin Fowler:

The tricky thing about technical debt, of course, is that unlike money it’s impossible to measure effectively.

To give my point a little more context: in the past I was mainly involved in test automation work at my current company. We have a strong habit of automating most of the tests we do, in order to run them several times before the software gets to the customer. For some hard-to-test components, however, we decided during the last year not to do this and to rely on manual tests alone. We ran into a problem when we were faced with the human factor of software development. Our release management group, which provides the software packages to the customer, packaged an older version of the stand-alone component we had tested only manually. Since we were not expecting the software to change, we had not rerun those manual tests before bringing the patch to production. Due to confusion while generating the package, it contained too much: one software component was outdated, which led to a production problem – which fortunately could be fixed by a highly responsible developer. My conclusion yesterday was that I had come across Technical Test Debt. Since we were not expecting the software component to change after the first approval from our customer, and test automation seemed too costly at the time we made the decision, we declined to write automated test cases that would verify the software in each delivered package. We were well aware of this issue, but we did not take the time and effort to pay off this Technical Test Debt and equip the software component with automated test cases running over each package we created – before it screwed up the production system. The sketch below shows the kind of check I mean.
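
As a sketch of the kind of automated check we skipped – with invented component names and a stand-in for reading the package manifest, since I cannot show our real delivery process – a single test per package could have caught both the outdated and the surplus component:

    import static org.junit.Assert.assertEquals;
    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Test;

    public class PackageContentsTest {

        // Stand-in: in reality this would read the component manifest
        // out of the actual package built by release management.
        private Map<String, String> versionsInPackage() {
            Map<String, String> versions = new HashMap<String, String>();
            versions.put("rating-engine", "2.4.1");
            versions.put("billing-gui", "2.4.1");
            return versions;
        }

        // Runs over each created package, so an outdated or surplus
        // component is caught before the patch reaches production.
        @Test
        public void everyDeliveredComponentHasTheApprovedVersion() {
            Map<String, String> approved = new HashMap<String, String>();
            approved.put("rating-engine", "2.4.1");
            approved.put("billing-gui", "2.4.1");
            assertEquals(approved, versionsInPackage());
        }
    }

Comparing whole maps means a missing, extra, or outdated component all fail the same test before the patch leaves the building.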

Another thing I would like to call Technical Test Debt concerns test automation as well. During my first days at my current company, we used shell scripts for test automation. There was a quite impressive set of shell-script functions which, in combination, automated most of the regression testing work. The debt here is similar to Technical Debt in software: since no experienced designers were assigned to the group to steer future improvements in a manageable direction, the shell-function codebase grew, and over the rushes of several projects no one ever got the time to pay off the Technical Debt in the test automation codebase. After some internal reorganisation, my team took the hard line and replaced our legacy shell-function based test automation framework with FitNesse. By doing so we achieved an order-of-magnitude improvement, keeping Technical Debt in the automation codebase as low as possible. We used regular refactoring, unit tests to communicate intent, and pair programming as key factors there, but I also see some shortcuts currently in one class or another, where I know directly that this Technical Debt was introduced by the first point from the list above.

While I’m pretty much convinced that the first of my two points is clearly Technical Test Debt, the second point is debatable. I like to define test automation as a software development effort, with all the underlying assumptions applying to the automation code. From this view, I would say the second point is just Technical Debt that occurred during software test automation. Another thought that strikes me about the first point: Technical Test Debt, in the form of tests never written, may carry high risks. That’s why a tester needs to know about risk management for the software under test.

Software Testing, Craft, Leadership and beyond