All posts by Markus Gärtner

Testing focus of Software Craftsmanship – Values

Some weeks ago I was first made aware of Software Craftsmanship. Around the same time I had come across Bob Martin’s fifth Agile value and Alistair Cockburn’s New Software Engineering. Just today I stumbled over a topic Joshua Kerievsky opened on the XP mailing list, The Whole Enchilada, only to find another blog entry from Dave Rooney on the underlying thoughts. It seems to me that something is coming, something is in the air, and I would like to share my current picture of it from a testing perspective. Just this week there was also a discussion on the Agile Testing group, started by Lisa Crispin’s blog entry on The Whole Team. I decided to organise these thoughts into a series of postings over the next few weeks. This time I would like to start with the values from Agile methodologies.

First of all, I have not read every book on every topic around Agile testing and Software Craftsmanship so far. There are books on technical questions on my bookshelf as well as managerial books that I would like to get into. Additionally, I have not yet had the opportunity to see an Agile team in action – though my team did a really good job moving the whole test suite from a shell-script-based approach to a business-facing test automation tool during the last year. My company came up with a new project structure during that time, and I only lately noticed that – similar to Scrum – there is a lack of technical guidance in the new project structure. The new organisation seems to focus only on the managerial aspects of software development, without advice on how to achieve technical success.

That said, I read a lot about Agile methodologies during the last year. The idea of Agile values and principles still fascinates me – so much so that I would like to compile a list of factors to pay attention to in day-to-day work. Since Agile methodologies use the practices, principles and values scheme to describe their underlying concepts – during the last year I noticed a parallel to ShuHaRi – I would like to come up with a similar structure. Here are the values from eXtreme Programming and Scrum combined into a single list:

  • Communication

Human interaction relies on a large amount of communication. This item is particularly related to the first value from the Agile Manifesto: individuals and interactions over processes and tools. Alistair Cockburn introduced the concept of information radiators, which combine communication and feedback on publicly visible whiteboards or flipchart pages. In the software development business, communicating the right way can avoid a lot of effort wasted on assumed functionality. Communication therefore also serves the Lean Software Development principle of eliminating waste.

  • Simplicity

The case for simplicity first arose when my team was suffering from a legacy test approach that dealt with too many dependencies and a chained-test syndrome. Simply put: the tests violated the simplicity value. Changing one bit in one function forced changes to several tests on the other side of the test framework; the small sketch below illustrates this chained-test problem. Due to the high coupling caused by the absence of design rules and of any effort to pay down Technical Debt, adapting test cases to the business needs was not simple. The high complexity of the test code base was the root of this lack of simplicity. By incorporating design patterns, refactoring and test-driven development we were able to address it in our current approach. Additionally, one thing I learned from Tom DeMarco’s The Deadline is that when you incorporate complex interfaces into the software system you are building, you also make the interfaces between the humans involved equally complex. This directly results in a higher amount of communication needed to compensate – a fact Frederick Brooks noticed nearly forty years ago in The Mythical Man Month.
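
To make the chained-test problem a bit more concrete, here is a minimal JUnit sketch with entirely made-up names (it is not our actual test code): each test sets up the data it needs itself instead of relying on state left behind by a previously executed test, so a change to one test cannot ripple through the others.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical sketch: each test builds its own data instead of relying on
    // state left behind by a previously executed test (the chained-test problem).
    public class RatingTest {

        // Minimal stand-in for a domain class; 0.10 per full minute of
        // accumulated call time, purely illustrative.
        static class Account {
            private int totalSeconds = 0;

            void addCall(int seconds) {
                totalSeconds += seconds;
            }

            double currentCharge() {
                return (totalSeconds / 60) * 0.10;
            }
        }

        @Test
        public void ratesASingleCall() {
            // This test creates everything it needs and can be changed in isolation.
            Account account = new Account();
            account.addCall(60);
            assertEquals(0.10, account.currentCharge(), 0.001);
        }

        @Test
        public void ratesTwoCallsWithoutDependingOnAnotherTest() {
            Account account = new Account();
            account.addCall(60);
            account.addCall(120);
            assertEquals(0.30, account.currentCharge(), 0.001);
        }
    }

In the chained variant, the second test would silently depend on the account created by the first, which is exactly the coupling that made our legacy tests so hard to change.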

  • Feedback

As described above under communication, information radiators that spread information, e.g. about the current status of the build in continuous integration or about remaining story points on a burndown chart, make feedback visible. There is even more to feedback. Continuous integration is a practice that provides continuous feedback from repetitive tasks such as unit tests, code inspections and the like. The feedback gathered at the end of each iteration is another opportunity for continuous improvement of the whole team. When feedback is gathered quickly, the individuals involved can act on it accordingly.

  • Courage

Every functional team has to address problems and come up with proposed solutions. In order to state the underlying problems you need the courage to bring up topics that might throw the project behind schedule. Staying quiet about these problems, on the other hand, may directly result in Technical Debt, which Dave Smith defines as follows:

TechnicalDebt is a measure of how untidy or out-of-date the development work area for a product is.
Definition of Technical Debt

There are other situations where each team member needs courage. Here is an incomplete list to give you an idea:

  • when organizational habits are counter-productive to team or project goals
  • when working in a dysfunctional team
  • when the code complexity rises above a manageable threshold

  • Respect

Respecting each team member for their particular achievements and technical skill set is, to my understanding, part of what defines a functional team. In a respectful work environment, a tester is more likely to find the courage to raise flawed code metrics or violated coding conventions. When raising issues about others’ work habits, a constructive way of solving problems by sending out a Congruent Message is more likely. When each team member respects the technical skills of every other team member, this holds even in problematic situations. In a respectful atmosphere criticism is not voiced as accusation and therefore leads directly to constructive and creative solutions rather than to defensive behaviour from the opposing parties.

  • Commitment

The whole team commits to the customer to deliver valuable quality at the end of each iteration. Without each team member’s commitment to deliver value to the customer, the project’s success is at stake. Leaving out unit tests on critical functions may blow up the application once it is in production. Likewise, left-out acceptance tests may become a time bomb that explodes during a customer presentation or some exploratory testing. The team also commits to its organisation to produce the best product it can, so that the organisation can make money with it. The flip side of the coin leads to distrust from organisational management. Committing to the vision of the project and the goal of the iteration is a basic value for every team.

  • Focus

Focus is an essential value behind testing. If you let yourself be distracted by follow-up testing, you may find yourself having exercised a bunch of test cases without following your initial mission to find critical bugs fast. Sure, it is relevant to do some follow-up testing on bug fixes from the latest changes, but if you lose focus on your particular testing mission, you also fail to deliver the right value to your customer. Face-to-face communication and continuous improvement help you keep your focus on the right items, while a simple system supports your ability to focus on the harder-to-test cases of your software.

  • Openness

If you would like to provide your customer with the best business value possible, it may turn out that you need to be open to new ideas. In particular, business demands change due to several factors: market conditions, cultural or organisational habits. As Kent Beck points out in eXtreme Programming eXplained:

The problem isn’t change, because change is going to happen; the problem, rather, is our inability to cope with change.

Without openness a tester is not able to cope with the change that is going to happen – be it driven by technical needs or by the business demanding the underlying change.

Testing Libraries vs. Frameworks

Lisa Crispin reported last week on the difference between test libraries and frameworks. When reading her blog entry I felt the urge to comment on it with the approach we use at work for our two-part testing functions. In the end I figured that I had not quite understood where the difference between test libraries and frameworks lies. Here is the comment I made on the blog entry:

Since I’m working in the specialised business of mobile phone rating and billing, we introduced our own framework for testing. During the last year we switched our legacy test cases, which were fragile and suffered from high maintenance costs, to a FIT-based approach. We came up with two parts for this.

We have built up a toolkit of reusable tools that are lightweight and will most probably be reused on the next project. These additionally need to be independent of particular customer terminology. From my point of view this portion is a test library for our product.

The other part incorporates it. We build fixtures that are customer-dependent, from project to project. We use the low-level toolkit components for this as far as possible and resort to subclassing or interface implementations where needed.

Historically these grew from the customer-specific fixture part towards reusable components in the toolkit. So we start off with working software and, once we have a working copy, peel out the details that we are likely to reuse.

Together both form a testing framework. I don’t seem to fully get your point on the difference between a library and a framework. Our low-level toolkit seems to be a test library; combined with customer terminology we get a framework, in which we simply need to write down the test cases in some tables.

Today I took the time to look up the definitions of a software library and a software framework on Wikipedia. After reading the definitions and reviewing my own words in the comment on Lisa’s blog, I now claim that we came up with a test library (our toolkit), and that combined with FitNesse, other third-party libraries – e.g. JDom, J2EE, etc. – and our customer-dependent fixture code (which we called our Fixtures) it forms a testing framework for our software product. Any other thoughts on this?
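
To make that distinction a little more concrete, here is a minimal sketch of the split described above. The class names are made up for illustration and are not our actual code: a small toolkit class free of customer terminology (the library part), and a customer-specific FIT column fixture that maps the customer’s table vocabulary onto it (the framework glue).

    import fit.ColumnFixture;

    // Toolkit part (the "library"): reusable and free of customer terminology.
    // Purely illustrative, not one of our real toolkit classes.
    class TariffCalculator {
        private final double pricePerMinute;

        TariffCalculator(double pricePerMinute) {
            this.pricePerMinute = pricePerMinute;
        }

        double charge(int durationSeconds) {
            // Charge per started minute.
            return Math.ceil(durationSeconds / 60.0) * pricePerMinute;
        }
    }

    // Customer-specific part (the "framework" glue): a FIT fixture whose public
    // fields back the input columns of a table and whose methods back the
    // checked columns.
    public class CustomerCallChargeFixture extends ColumnFixture {
        public int callDurationInSeconds;   // input column
        public double pricePerMinute;       // input column

        public double expectedCharge() {    // computed column, checked by FIT
            return new TariffCalculator(pricePerMinute).charge(callDurationInSeconds);
        }
    }

The toolkit class could move from project to project unchanged, while only the fixture and the table vocabulary would be rewritten for the next customer.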

Beware of your Green Field Messes

This week I paired with a colleague from another team. We were reworking one of our approaches and had started implementing one of the class responsibilities anew. We wanted to improve the performance of the overall test suite and needed to look into several classes to find out where the seconds and milliseconds got lost. I already knew there was one class that violates the Single Responsibility Principle, and I had been meaning to look into its code for several weeks. Basically I was considering two options: either start rewriting the major classes around our tests from scratch to improve the performance, or pay down the technical debt that caused our performance problems in the existing code.

We decided to start with the first option and spent all of Monday on some ping-pong programming for the new class we were building. When we tried it out, it worked from a functional point of view. But when we wanted to incorporate our new class into the old code, we realised that this would be a major change. So we started to look into the existing code. Soon we realised that there were some simpler problems in the old classes, and fixing them would be easier than the overall rewrite.

Some weeks ago a blog entry from Robert Martin had made me aware that I would need to clean up my own mud myself. Still, I had to go through this experience myself to realise that his statement was absolutely right. Hopefully I will be smarter next time.

Software Testing Craftsmanship influence on personal development plans

Since I will be attending the Software Craftsmanship Conference in London on 26th February 2009, I started making up my mind about what I believe belongs to the field of software testing craftswomen and -men. This entry is a follow-up to Skillset Development, which I wrote earlier. Since writing that article, I have started to think about the craftsmanship aspect of software testing. While I previously thought of the development of a skill only in ShuHaRi terms, craftsmanship describes a combination of skills as apprentice, journeyman and master. (Usually I compare this with Star Wars Jedi ranks: Padawan, Jedi Knight and Jedi Master, but I don’t know whether George Lucas holds the copyright on those terms.)

This year I took some time with my two direct reports and discussed a personal development plan with each of them. The results seemed successful for both, so I decided to share the approach we took. Basically I began by introducing ShuHaRi for individual skill development, also mentioning the interrelationships – there is Shu in the Ha stage, and both Shu and Ha in the Ri stage. For visualisation purposes I drew three concentric circles: in the innermost I wrote Shu, the middle one I labelled Ha, and the outermost circle was labelled Ri.

Then I began to introduce the craftsmanship aspect of our business. On the opposite side of the whiteboard I wrote down the terms apprentice, journeyman and master and explained that we were meeting in order to get a common picture of where each colleague would classify himself and what we could do to move that classification forward in the next year, for example from journeyman towards master on the scale. While the classification was not very precise and somewhat fluid, it gave us a good rule of thumb for the process to follow.

After introducing my view on what targets we could settle on for a personal development plan for each of the two, we divided the work we do in our test group into categories, which we identified on an individual basis. Here are the main categories we came up with for both:

  • Technical skills
  • Leading skills
  • Collaboration skills

Please note that I consider these categories to be team-, culture- and organisation-dependent. Under technical skills we listed test methodology, as discussed by the many books out there, and programming skills for test automation. During the past year we introduced the Framework for Integrated Tests in our team. Since we could not count on much support from our development team, we had to write the fixture code ourselves, and I introduced many aspects of test-driven development, design patterns and refactoring combined with pair programming and continuous integration. Therefore programming skills were included in both sessions.

For the leading part I described two ways of applying leading skills in our group: either as technical lead within our project structure or as leader of the group. Since the latter job is already taken by me, the opportunities lie in the technical lead. As I had put one of my colleagues into a major project as technical lead for the testing part, this topic was very interesting for him. My other colleague told me that he thinks he is far enough along on the technical side to become a leader there. That seemed reasonable to me, so we did not spend much time on this topic.

The last main category covered the skills you need in order to collaborate with people outside our group. This includes, for example, giving presentations, working together with people from the development team and project management, and etiquette. I found this topic a bit difficult to handle during the sessions, but we made it through.

After discussing the sub-items we wanted to look at, we first identified where each of the two currently stood based on the ShuHaRi categorisation. Jointly we went through each major category, identified topics we would like to consider, and wrote down one to three things together with a good guess of where the individual saw himself. After capturing the current development status, we were ready to review the list and identify steps to work on in the current year. For each item we discussed whether there was a need to improve and what we could do about it. For some of the topics we were able to identify two to three things we would like to try out. All this was written down on a flipchart page, which we took with us afterwards.

When we had finished each individual list, we agreed on a review date for it. The more junior colleague suggested looking back at it after three months; the other suggested a period of six months. After talking to each of them, we additionally agreed to hang the flipcharts in our office right at their desks as a reminder in our day-to-day work.

I would appreciate feedback on this approach in the comments. If you have thoughts on how I could improve this process, I would be glad if you left a note. Hopefully, by following this approach, I will be able to come up with a list of skills I consider relevant for a software tester in a few more months.

Impressions from a Black Belt Testing Challenge

Two weeks ago Matt Heusser asked for participants for a Black Belt Testing Challenge. Since I could not resist the challenge, I wrote him an e-mail saying I would like to take part. The challenge consisted of watching a video and defending my personal view on it. When I wrote him my reply to the video, a technical talk from a conference, and how I see the test automation strategy shown in it, we started to combine each other’s thoughts on the matter. In doing so we both realised that there is even more to testing than one might know from past experience. From my point of view, we were struck by the realisation of the context-driven testing approach as lately made clear by Cem Kaner.

In the end I was happy to read this line from Matt:

You are the first person to successfully step up to the challenge!

Technical Debt applied to Testing

During the last two days one thing puzzled me. Yesterday I had our annual review meeting at work, and during that session one topic came up that I kept thinking about the whole evening. Since I had read about Technical Debt on Sean Lendis’ blog that morning, the conclusion I came to was that I had seen an occurrence of something I would like to call Technical Test Debt. Maybe someone has already given this a different name, maybe it’s just unprofessionalism, but I’ll call it Technical Test Debt for now. In order to introduce Technical Test Debt, I start by revisiting Technical Debt. Here is a quotation from Ward Cunningham’s site, which holds the first definition:

  • Technical Debt includes those internal things that you choose not to do now, but which will impede future development if left undone. This includes deferred refactoring.
  • Technical Debt doesn’t include deferred functionality, except possibly in edge cases where delivered functionality is “good enough” for the customer, but doesn’t satisfy some standard (e.g., a UI element that isn’t fully compliant with some UI standard).

A key point I see about Technical Debt is that it seems to occur on Agile teams as well as on non-Agile, waterfall, V-Modell, whatever teams on the development side. One thing is difficult for sure: measuring Technical Debt. James Shore came up with a definition last year, but it has not fully satisfied me so far. To quote Martin Fowler:

The tricky thing about technical debt, of course, is that unlike money it’s impossible to measure effectively.

To give my point a little more context: in the past I was mainly involved in test automation work at my current company. We have a strong inclination to automate most of the tests we do, in order to run them several times before the software gets to the customer. For some hard-to-test components, however, we decided during the last year not to do this and to rely on manual tests instead. We ran into a problem when we were faced with the human factor of software development. Our release management group, which provides the software packages to the customer, packaged an older version of the stand-alone component we had only tested manually. Since we were not expecting that software to change, we had not run those manual tests before bringing the patch to production. Due to confusion while generating the package, it contained too much; one software component was outdated and led to a production problem – which fortunately could be fixed by a highly responsible developer. My conclusion yesterday was that I had come across Technical Test Debt. Since we were not expecting the software component to change after the first approval from our customer, and test automation seemed too costly at the time we made the decision, we declined to write any automated test cases that would verify the software for each delivered package. We were well aware of this issue, but we did not take the time and effort to pay off this Technical Test Debt and equip the software component with automated test cases that would run over each package we created, before it broke the production system.
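
To sketch what paying off that debt could have looked like, here is a hedged example with invented file names, property keys and versions (our actual delivery pipeline is of course different): a small automated check that runs over every created package and fails when the bundled stand-alone component is older than the approved version.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Properties;

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Hypothetical package check: reads the version of the stand-alone component
    // from a descriptor inside the delivered package and fails if it is older
    // than the minimum approved version. Paths, keys and versions are made up.
    public class DeliveredPackageTest {

        private static final Path PACKAGE_DESCRIPTOR =
                Paths.get("delivery", "package.properties");    // assumed location
        private static final int MINIMUM_APPROVED_VERSION = 42; // assumed version

        @Test
        public void packagedStandAloneComponentIsNotOutdated() throws IOException {
            Properties descriptor = new Properties();
            descriptor.load(Files.newBufferedReader(PACKAGE_DESCRIPTOR));

            int packagedVersion = Integer.parseInt(
                    descriptor.getProperty("standalone.component.version"));

            assertTrue("Package contains an outdated component: version " + packagedVersion,
                    packagedVersion >= MINIMUM_APPROVED_VERSION);
        }
    }

Run as part of packaging, such a check turns the manual “we were not expecting it to change” assumption into something that fails loudly before the package reaches production.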

Another thing I would like to call Technical Test Debt concerns test automation as well. During my first days at my current company, we used shell scripts for test automation. There was a quite impressive set of shell-script functions which, in combination, automated most of the regression testing work. The debt here is similar to Technical Debt in software: since no experienced designers were assigned to the group to steer future improvements in a manageable direction, the shell-function codebase grew, and over the rush of several projects no one really got the time to pay off this Technical Debt in the test automation codebase. After some internal reorganisation my team took the hard line and replaced our legacy shell-function-based test automation framework with FitNesse. By doing so we managed to gain an order-of-magnitude improvement while keeping Technical Debt in the automation codebase as low as possible. We used regular refactoring, unit tests to communicate intent, and pair programming as key factors there, but I also see some current short cuts in one class or another where I know directly that the Technical Debt was introduced by the first point from the list above.

While I’m pretty much convinced that the first of my two points is clearly Technical Test Debt, the second point is debatable. I like to define test automation as a software development effort, with all the underlying assumptions applying to the automation code. Taking this view, I would say that the second point is just Technical Debt that occurred during software test automation. Another thought that strikes me about the first point is that Technical Test Debt in the form of unwritten tests may carry high risks. That’s why a tester needs to know about risk management for the software under test.

Innovations for the New Software Engineering

Last weekend I attended a seminar for IT professionals. On Saturday evening there was a panel discussion about which innovations to expect from the IT world over the next years. While listening to the participants and some leaders from bigger German companies, something struck me. The question made me think about what the biggest thing that could happen might be. Somewhere during the discussion of cost-centre versus profit-centre thinking, it came to me that the most obvious innovation seemed to be ignored. I decided to share my insight here as well:

Involvement of customers and stakeholders in software engineering

How do I get to this obvious innovation? It is no solution to gather requirements for weeks and months, disappear into the black box of programming to build a system according to a plan that gets blown up anyway, and deliver partially tested software only to find out that the 20 per cent of value for the actual users of the product is not included. If I want a new suit that looks nice, I go to the dressmaker and participate in and contribute to his work. Of course he can take my measurements, go into his black box of dressmaking and deliver the right suit for me. Really? No, some try-on iterations are worthwhile in order to get what I would like to have anyway.

I’m pretty much convinced: only if customers and stakeholders understand this lesson I learned from Agile software development will it contribute to software engineering. It is not enough to realise that there is something I have to specify and that I get something delivered afterwards. Just as I have to give a dressmaker my input on what I would like to have, such as the colour of the cloth he should use or its material, the software customer has to participate in the definition of the software system he would like to have built.

Likewise, if I go into a shop and ask for a suit, a good salesman will ask me questions in order to learn what style I prefer. Just as I participate in this requirements gathering by giving the right answers, it is the same in the software world. Software engineers cannot stare into a crystal ball to learn what advantage the system shall give its users. (Though there is a family of methodologies named after the first of those two words, that is not the intended meaning behind it.)

Likewise, it is natural that some iterations are necessary in order to get the right system. This is as true for dressmaking as it is for software engineering. One has to look over the built product in order to see whether it fits the needs. Due to the long duration of software projects, the business situation may have changed so that the product no longer fits. Maybe I was wrong in the first place when trying to guess what I would like to have – because I as a customer cannot look into my own crystal ball either.

Agile goes a step further. Agile asks the customer not only for participation but for commitment to the product that is being built. This is the case for iteration demonstrations of the working software every other week. This is the case during the iterations while working on its complicated business rules. In the dressmaker comparison there are no really complicated business rules behind the product he builds. This is the part where craft comes into play. The dressmaker has learned how to stitch cloth together to make a suit for me. The Agile toolkit does this for software engineering to some degree.

Last but not least, I would like to note that heavy customer and stakeholder involvement is not all of it. But at the current time it is the biggest first step to take in order to get real innovations out of software engineering. Just as pair programming and pair testing, with the right input at the right time, more than double the output, the same holds true for real customer involvement.

Craftsmanship over Execution

Currently there is a lot of discussion going on about the causes of failing Agile teams. After watching these discussions for quite a while, here is my view of the topic. The brief quintessence is: if you’re doing crap, it doesn’t matter whether you call it Agile or not – you’re just doing crap. If you’re doing a good job, it doesn’t matter whether you call it Agile or not either – you’re just doing a good job. About a year ago I started to look at how Agile software development is done, especially with regard to testing on Agile teams. Since I realised that my company was struggling with delivery based on waterfall projects and non-cross-functional teams, Agile seemed to be a solution worth looking into. Since introducing a change in working habits is a large effort and I was not in a key position to introduce it, I focused especially on the mindset behind Agile software development and its success factors. Incrementally I managed to introduce changes based on the practices from XP – but for several reasons I was convinced that I could not introduce all aspects of it. Today I would say that I did not have the courage to just change things and come up with the success story afterwards.

The changes I started to introduce in my company were tiny. We were suffering from shell-scripted test automation that was very interface-sensitive. A change in one of our software components could lead to a complete regression test disaster, leaving our testing team unable to give the stamp of approval safely. Manual testing was not a practical option, since it would have taken many weeks of effort, but the bug fixes on our desks were critical for our customer. My team decided to reimplement test automation with a new approach, and we decided to use FIT for it.

The next six weeks were spent focusing on the highest-priority business use cases. We were able to build a new testing framework on them. While I was reading The Art of Agile Development by James Shore and Shane Warden, I brought many aspects of the practices and their underlying principles and values into my team. Though we did not implement the full set of practices, we were quite successful over the next few months and finally finished our energised efforts with a test framework that allowed us to run our group during vacation time with just 1.5 people, whereas one year before we had been unable to hold the pace with 10 people.

In the meantime one of my colleagues also introduced a testing framework based on FIT, but with a different approach. While my team used Java, his used Python. While my team focused on the readability of the tests from a business perspective, reducing all visible test data to the necessary level of detail, his team put every little detail of the tests in there – whether it contributed to the expected results or not. From an organisational point of view it was decided that we needed to focus on just one programming language for easier job advertisements. We came up with the solution of using Java, since most of our developers either have a Java background or are Java developers. We started to move the existing Python classes to Java without introducing the people to Java beforehand. I did not participate much in the migration of the classes, and today I see that a bunch of technical debt was introduced. From my point of view there are two flaws that result from neglecting the craftsman aspect of our work: the table structure and the Java class code.

While I had read the book on FIT before starting to do anything, my colleague just started. Since he did not get the recommendations from the people who invented the framework, and the problems they point out on several topics, the tables are interconnected, and reading them is a brain-twister for me. The second problem was that we were forced to stick to the wrong table layout while porting the fixture code from Python to Java. Also, since the people doing the work were new to Java, they did not use pair programming or unit tests for the fixture code, and had no reusable design in mind. From my point of view, getting the toolset ready for the next customer will be an immense amount of work if we agree to stick to that table layout.
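
As an illustration of the kind of safety net that was missing during the migration, here is a minimal sketch of a unit test around fixture code, with made-up names (the fixture below is only a stand-in, not our real code): it pins down the behaviour of a single fixture method so the Python-to-Java port could have been checked without running whole FitNesse tables.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example of a unit test for fixture code; the nested class is
    // a stand-in for a ported fixture, purely illustrative.
    public class DiscountFixtureTest {

        // Minimal stand-in for a ported FIT fixture.
        static class DiscountFixture {
            public double monthlyCharge;        // would be an input column in the table

            public double discountedCharge() {  // would be a checked column
                return monthlyCharge > 100.0 ? monthlyCharge * 0.9 : monthlyCharge;
            }
        }

        @Test
        public void appliesTenPercentDiscountAboveTheThreshold() {
            DiscountFixture fixture = new DiscountFixture();
            fixture.monthlyCharge = 120.0;
            assertEquals(108.0, fixture.discountedCharge(), 0.001);
        }

        @Test
        public void leavesSmallChargesUntouched() {
            DiscountFixture fixture = new DiscountFixture();
            fixture.monthlyCharge = 80.0;
            assertEquals(80.0, fixture.discountedCharge(), 0.001);
        }
    }

A handful of such tests per fixture would have communicated intent to the people new to Java and caught porting mistakes long before the interconnected tables did.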

My conclusion from that lesson is that Robert C. Martin is of course right. With a proper introduction to Java, JUnit and FIT, the solution would be better today. We did not say we were implementing Agile. We just focused on the parts of Agile that helped us a lot. The first approach on my side was a success because we were simply doing a good job with future opportunities in mind and did only the necessary things, skipping most of the YAGNI implementations. We were not doing Agile, we were not calling it Agile, we were just doing a good job. In the second step, we were not doing Agile either. We were not calling it Agile either. We were just doing a bad job: not paying attention to the skill set of the people doing the implementation, not reviewing the code before or even directly after committing, no pair programming. We were doing a bad job, that’s it.

As stated initially, it does not matter whether you do a good job and call it Agile or not, or whether you do crap and call it Agile or not. In either case you either succeed or fail, no matter what name you give the child.


Skillset development

Lately I exchanged some thoughts with Alistair Cockburn over e-mail regarding his new site. In it I introduced him to some thoughts that arose in my mind while reading through Agile Software Development – The Cooperative Game and Extreme Programming Explained, and he told me to try the terms out for a while. I therefore decided to introduce the thinking behind them to everyone. To introduce the topic, I will first explore some definitions from the Extreme Programming (XP) and Agile world and try to relate them to the thoughts from the Cooperative Game.

Extreme Programming

This chapter describes Kent Beck’s initial thoughts on the relationship between practices, principles and values in XP. Kent’s initial view is complemented by thoughts from James Shore.

Practices

The XP practices form a key set of actions to follow in order to get into the XP way of thinking. Practices are the entry point for teams new to XP. They provide the first insights into how work is done the XP way. Practices are clear, so that everyone knows whether they are being followed. Practices are situated, in that they tell you what to do only in a particular circumstance; they are not applicable to the rest of your life. In the second edition of Extreme Programming Explained, Kent distinguishes between two levels of practices: the primary and the corollary.

The primary practices are introduced to get started right from scratch. They are practices everyone should be able to start with right now without any dependency – besides the will to apply them. Among these practices are Sit Together, Whole Team, Pair Programming, Slack, Ten-Minute Build and Continuous Integration – just to name a few. The corollary practices are those which might be difficult to implement without following the primary ones. Among these are Team Continuity, Code and Tests, Single Code Base and Daily Deployment.

Principles

As Kent Beck describes in a wonderful image, practices and values are an ocean apart, and principles build the bridge between them. The principles relate the clear practices to the universal values. Principles make the XP team realise why they are doing a particular practice and lead to the higher-level, universal values behind them. Among the XP principles are Humanity, Mutual Benefit, Reflection, Flow, Quality and Baby Steps – just to name some examples.

Values

Values form the universal foundation underlying the practices. XP values communication, simplicity, feedback, courage and respect. The underlying values of communication, courage, respect and feedback strongly motivate, for example, sitting the whole team together as closely as possible; the principle of humanity demands this. Without the values behind the practices, the latter would be tedious activities performed only because they are written down somewhere. Values form the motivating factor of the XP practices, while principles give the reasons to apply them.

Agile

An article Robert C. Martin wrote on Object Mentor summarises how XP relates to the four Agile values and the twelve principles mentioned in the Agile Manifesto. Alistair Cockburn writes that the people who signed the Agile Manifesto did not dare to go deeper into the matter and try to agree on common practices. Robert C. Martin perfectly describes why the XP practices meet the Agile values and principles, but XP is just one of several applications of the Agile values and principles turned into practices.

The Cooperative Game

In the Cooperative Game, Alistair Cockburn introduces the ShuHaRi levels of skill set development. The words Shu, Ha and Ri are taken from the Aikido domain, and Alistair maps them onto skill development in software.

Shu

Shu means to keep, protect or maintain.

In the Aikido domain the Shu level relates to a student just beginning to learn the techniques. The Shu level is therefore the entry point for learning new skills. The student is taught to apply the techniques by copying them and following the teacher’s advice to the word. The protection at the Shu level lies in shielding the student from early distraction by the many existing techniques around. The student shall be able to focus on practical advice – on practices.

Ha

Ha means to detach: the student breaks free from the traditions.

After following the practices, the student starts to question them rather than merely performing repetitive activities. At this second level the underlying purpose of the initial teachings is realised. In human development this stage could be compared to the puberty of teenagers, who start to question the practices they initially just copied from their parents and other children in their environment.

Ri

Ri means to go beyond or transcend.

At the third and last stage the student adapts the previously learned practices to their own style. While applying the principles they have just become aware of, the underlying values are realised and followed.

Conclusion

Basically, in my current picture of Agile, XP and ShuHaRi, practices are what you follow at the Shu entry level of skill set development. When trying to acquire an Agile mindset, you need to start with particular practices at the Shu level. Since you are just getting into it, you need to learn the practices to the word in order not to get distracted by your previous working habits and values.

At the Ha level there are principles which give you the reasons for particular practices. When the application of the underlying practices has become firm in your mindset, you will begin to question particular activities. You will only come to realise the underlying principles if you have followed the practices at the Shu level long and thoroughly enough. This is the point in time when adaptation has taken root. You should now be in a position to see the larger picture behind your working habits, be able to begin to change some of the details, and try to fit variations of the practices to your particular circumstances. You should be able to start adapting. For this phase to start it is essential to have followed the practices by the book – as strikingly noted in How To Fail With Agile.

After realising through adaptation what works and what does not, you will be able to see the universal parts of your new skills. At the Ri level you will start living the values naturally, without even thinking about them.

To conclude my proposal: the Ha level of skill set development builds a bridge between the Shu and the Ri level. At the Shu level you follow the practices by the book, at the Ha level you start to realise the underlying principles of the particular methodology you are following, and at the Ri level you see the whole world in values and develop your own style. That is the quintessence.

Outlook

Since I have a strong background in software testing, I would personally like to try to define practices at the Shu level, principles at the Ha level and values at the Ri level for software testers. Elisabeth Hendrickson, Lisa Crispin and Brian Marick have contributed very good thoughts on these topics, but I have not seen the ideas structured in the way I tried to sketch here. If my proposal turns out to be valuable, this might become future work for Agilists.

Since I asked Alistair Cockburn to review the text, he has already been able to add a note about it on his blog. Thanks, Alistair.

Everyone should read The Cooperative Game

While browsing through some of Alistair Cockburn’s blog entries, I found the following one: Everyone should be a methodologist. After reading through the errors of early methodologists in part 2 of that entry, I realised that I personally ran into points 2 and 3 lately. Likewise, I often see points 4 and 5 forced in my company. Thanks, Alistair, for reminding me of this. If the content got you interested, I recommend reading Alistair’s book Agile Software Development – The Cooperative Game, which covers many, many more aspects.