At the recent meeting of Hamburg’s Softwerkskammer – the German Software Craftsmanship movement – we worked through a Coding Dojo on the Roman Numeral Kata. Michael Norton wrote today about one piece that worried me as well. I think Michael did a fantastic job tackling it with a different approach. But he reminded me that I wanted to put up some of the thoughts from the Coding Dojo.
We had about 12 participants in the dojo. After explaining the format and the kata, we got started. Soon one participant was asking so many questions that the pair at the keyboard stopped making any progress.
Eventually we got the team back up and working on the problem again. The interrupter’s claim was that we didn’t yet understand the problem well enough to design the solution. Another claim was that TDD is not a design technique. Let’s take a closer look at both of these.
In November I had the opportunity to spend a whole week with Kent Beck. it-agile GmbH had invited him to Hamburg, Germany, for two courses – Responsive Design and Advanced TDD – and one workshop, and I took both courses and the workshop. Today I was contacted by Johannes Link, who was surprised not to find a write-up of this week on my blog. It turns out that somewhere during the past year I have turned into a reporter. So, here is my summary of what I could get from my notes. Initially I planned to write it in an email to Johannes, but then I thought: why not share those comments on my blog? Maybe others have been looking forward to it as well.
Nearly four months have passed since I started my new job at it-agile GmbH. Lots of things have happened since then. I got to know many teams, and I learned a lot about design, architecture, test-driven development, and also about testing. This blog entry is about the experiences I have had since September teaching ATDD – I deliberately call it ATDD since I haven’t found a more suitable name, though I know that name should be replaced with something different – and about what I plan to work on in the next year.
Jason Gorman put up a video on the Open-Closed Principle on his blog today. I claimed on Twitter that he was refactoring while on a red bar. During the discussion, I decided to put up on my blog how I would have developed the Fibonacci extension while refactoring only on a green bar.
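To make the distinction concrete, here is a minimal sketch – not Jason’s example, just an assumed JUnit 4 test and a made-up Fibonacci class – of what refactoring only on a green bar means: write the failing test, make it pass with the smallest change, and restructure only once everything is green again.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical illustration: evolving a Fibonacci function test-first,
// restructuring the code only while all tests pass.
public class FibonacciTest {

    @Test
    public void firstTwoNumbersAreTheBaseCases() {
        assertEquals(0, Fibonacci.of(0));
        assertEquals(1, Fibonacci.of(1));
    }

    @Test
    public void laterNumbersAreTheSumOfTheirTwoPredecessors() {
        // Added only after the previous test passes; this one drives
        // out the recursive case.
        assertEquals(1, Fibonacci.of(2));
        assertEquals(5, Fibonacci.of(5));
    }
}

class Fibonacci {
    static int of(int n) {
        // Simplest implementation that keeps the bar green; any clean-up
        // (extracting methods, renaming) happens now, on green, not while
        // a new test is still failing.
        if (n < 2) {
            return n;
        }
        return of(n - 1) + of(n - 2);
    }
}
```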
Since I work in a more traditionally oriented environment, I’m facing some questions regarding the usage of test frameworks such as FitNesse, Robot Framework or Concordion. The problem I hear about most often is that teams start implementing fixture code before any test data – test tables or HTML pages – is available. The scene from J.B. Rainsberger’s Integration Tests Are a Scam talk immediately comes to mind, where he slaps the bad programmer’s hand. Bad! Bad, bad, bad! Never, ever do this. So, here is a more elaborate explanation, which I hope I can use as a reference for my colleagues.
The first thing to do is to pick up the classics on the topic and check what they say about it. Let’s start with the classic FIT for Developing Software. The book is separated into a first part covering the table layouts, while the second part goes into detail about the classes for the implementation. So Ward Cunningham and Rick Mugridge seem to follow this pattern: tables first, fixtures second. Great. Next reference: Bridging the Communication Gap. In it, Gojko introduces specification workshops and specification by example. Both are based on defining the test data first and automating it later on. This helps build up the ubiquitous language on the project at hand.
But there is more to it. Since test automation is software development, let me pick an example from the world of software development. Over the years, Big Design Up-front has become an anti-pattern in software development. There are some pros to it, but on the con side: I may try to think about each and every case I might need for my test data, and I may well be wrong about that. So, just in case you are not from Nostradamus’ family, thinking about your design too much up-front may lead to over-design. This is why Agile software development emphasizes emergent design and the simplest thing that could possibly work. Say I now build ten classes which turn out to be completely unnecessary once the test data is written down; then I have spent precious time on building them – probably without even executing them. When the need for twenty additional classes arises later on, the time spent on those initial ten useless classes cannot be recovered. Additionally, these ten classes may now make me suffer from Technical Debt, since I need to maintain them – just in case I may need them later. Maybe the time initially spent on the ten useless classes would have been better spent on getting the business cases down properly in the first place – for those who wonder why your pants are always on fire.
Last, if I retrofit my test data to the functions already available in the code, I have to put unnecessary detail into my tests. The FIT book as well as the Concordion hints page list this as a malpractice or smell. For example, if I need an account for my test case and I am retrofitting it to a full-blown function call which takes a comma-separated list of products to be associated with the account, a billing day, a comma-separated list of optional product features and a language identifier as parameters, I would have to write something like this:
| create account |
| myAccount | product1,product2,product3 | 5 | feature1,feature2,feature3 | EN |
If I can instead apply wishful thinking to my test data, I can write it down as briefly as possible. So, if I don’t need to care about the particular products and features sold, I may as well boil the above table down to this:
| create account |
| myAccount |
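For completeness, here is a rough sketch of what could hide behind such a brief table on the automation side. It is not tied to any particular framework, and the class, method and AccountService names as well as the default values are made up for illustration; the point is only that the fixture supplies the details the test data deliberately leaves out.

```java
// Hypothetical fixture sketch: the brief table only names the account,
// while the fixture fills in sensible defaults for everything the
// test data does not care about.
public class CreateAccountFixture {

    private final AccountService service = new AccountService();

    // Called for the brief table: | create account | myAccount |
    public void createAccount(String accountName) {
        createAccount(accountName,
                "defaultProduct",   // any product will do for these tests
                1,                  // default billing day
                "",                 // no optional features
                "EN");              // default language
    }

    // The full-blown signature remains available for the few tests
    // that really do care about these details.
    public void createAccount(String accountName, String products, int billingDay,
                              String features, String language) {
        service.create(accountName,
                products.split(","),
                billingDay,
                features.isEmpty() ? new String[0] : features.split(","),
                language);
    }
}

// Minimal stand-in for the real system under test.
class AccountService {
    void create(String name, String[] products, int billingDay,
                String[] features, String language) {
        // ... would call into the application under test
    }
}
```

The design point is simply that the defaults live in one place – the fixture – rather than being repeated in every table.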
In addition to this simplification, think about the implications a change to create account in the example above would have when I need to add a new parameter, for example the last billed amount for that account. If I have come up with six hundred test tables by the time this additional feature is introduced, I would have to change all six hundred tests. The time spent changing these six hundred tests will not be available for testing the application. Wasted – by my own fault earlier!
In the end, it boils down to this little sentence I used to describe this blog entry briefly on Twitter:
When writing automated business-facing tests, start with the test data (the what), not the fixture to automate it (the how). ALWAYS!
Over the last two days I took advantage of the silence at work to get some focused work done. Since I’m currently reading Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce, I tried out their approach while retrofitting some unit tests onto some of my test classes.
It took me between one and two hours after lunch to get the first class under test. Implement a unit test, bring in the necessary support code, run the test, make it pass after dealing with some of my own stupidities, then move on. In the end, when I felt I was done and couldn’t think of yet another test (I also had checks for exceptions in place by that time), I started a code coverage build to see which passages of the code I was missing. I was surprised to see that all the tests passed and that I had reached 100% code coverage. 100% line coverage, 100% branch coverage – there was nothing left to unit test on that class. Since we had also been using the class for quite a while, I knew it was working in our test harness, so I was done with it. I checked in the new unit test and drove home happily. That feeling of pride after cleaning up some Technical Debt was a great moment. It felt (and still feels) great.
One thing actually struck me: my tests were very clear and readable. I was tempted to show them to everyone. Since the offices are rather empty during this time of the year, there was no one I could annoy with my pride. I was reminded of Enrique Comba-Riepenhausen at the Software Craftsmanship conference in London in February. He stated during Adewale Oshineye’s session that his customers were on-site to the degree that they were actually able to read the production code. I felt that I had just written a piece of code that my customer would be able to read. (In retrospect, I don’t think so.)
The essence of writing maintainable automated checks lies in writing them clearly. Indeed, you should write your test code even more clearly than your production code, to keep it vital and in shape to serve you during your further development efforts. Unfortunately, teams underestimate the value their test automation brings them when it is written and maintained properly. There is not much magic to it. But you had better take care of your test automation code, or you’ll get trapped in the automation pitfall.
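As a small, made-up illustration of what that can look like at the unit level (the class and helper names are hypothetical): the test names the rule it protects and pushes the setup noise into intention-revealing helpers, so it stays readable as the code around it changes.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical sketch of a readable check: the test states the rule,
// the helpers hide how the objects are built.
public class DiscountTest {

    @Test
    public void premiumCustomersGetTenPercentOff() {
        double discounted = priceFor(aPremiumCustomer(), 200.00);

        assertEquals(180.00, discounted, 0.001);
    }

    // Intention-revealing helpers: the "how" of building a customer
    // lives here, the test above only states the "what".
    private Customer aPremiumCustomer() {
        return new Customer(true);
    }

    private double priceFor(Customer customer, double listPrice) {
        return new PriceCalculator().priceFor(customer, listPrice);
    }
}

// Minimal stand-ins for the production code under test.
class Customer {
    final boolean premium;
    Customer(boolean premium) { this.premium = premium; }
}

class PriceCalculator {
    double priceFor(Customer customer, double listPrice) {
        return customer.premium ? listPrice * 0.9 : listPrice;
    }
}
```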
Dale Emery influenced this blog entry with his paper on Writing Maintainable Automated Tests. In essence, I’m going to compare several testing approaches: unit testing (i.e. test-driven development), functional or acceptance testing, and exploratory testing. I’ll look at the means necessary for regression capability (checking), the costs of testing, and the costs of adapting to changing requirements.
Here is a random collection of things I noticed at the Agile Testing Days which I left out of the previous entries. Mostly these refer to people being good at looking around, as Alistair Cockburn put it.
On the final day of the Agile Testing Days my own presentation was due. Since it was my first conference presentation ever, I had quite some stage fright about how it would go. Still, here is my write-up of the other presentations I attended.
On Tuesday after lunch at the Agile Testing Days there were two keynotes, and in the evening an Oktoberfest celebration. Following is my write-up of the notes I took during that period.