Since I work in a rather traditionally oriented environment, I face some questions regarding the usage of test frameworks such as FitNesse, Robot Framework or Concordion. The problem I hear about most often is starting to implement fixture code before any test data, in terms of test tables or HTML pages, is available. The scene from J.B. Rainsberger’s Integration Tests Are a Scam talk immediately comes to mind, where he slaps the bad programmer’s hand: Bad! Bad, bad, bad! Never, ever do this. So, here is a more elaborate explanation, which I hope I may use as a reference for my colleagues.
The first thing to do is to pick up the classics on the topic and check what they have to say. Let’s start with FIT for Developing Software. The structure of the book is telling: the first part covers the table layouts, while the second part goes into detail about the classes that implement them. Ward Cunningham and Rick Mugridge seem to follow this pattern: tables first, fixtures second. Great. Next reference: Bridging the Communication Gap. There, Gojko Adzic introduces specification workshops and specification by example. Both are based on defining the test data first and automating it later on. This helps build up the ubiquitous language on the project at hand.
But there is more to it. Since test automation is software development, let me pick an example from the world of software development. Over the years, Big Design Up-Front has come to be regarded as an anti-pattern in software development. Though there are some pros to it, the main con is this: I may try to think of each and every case I might need for my test data, and I may simply be wrong about that. So, just in case you are not from Nostradamus’ family, thinking about your design too much up-front may lead to over-design. This is why Agile software development emphasizes emergent design and the simplest thing that could possibly work. Say I now build ten classes which I turn out not to need at all once the test data is noted down; then I have spent precious time on building these – probably even without executing them. When the need for twenty additional classes arises later on, the time spent on those initial useless ten classes cannot be recovered. Additionally, these ten classes may now make me suffer from technical debt, since I need to maintain them – just in case I may need them later. Maybe the time spent initially on the ten useless classes would have been better spent on getting down the business cases properly in the first place – for those who wonder why your pants are always on fire.
Last, if I retrofit my test data to the functions already available in the code, I have to put unnecessary detail into my tests. The FIT book as well as the Concordion hints page lists this as a malpractice or smell. For example, if I need an account for my test case and I retrofit it to a full-blown function call which takes a comma-separated list of products to be associated with the account, a billing day, a comma-separated list of optional product features and a language identifier as parameters, I would write something like this:
create account | myAccount | product1,product2,product3 | 5 | feature1,feature2,feature3 | EN |
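Behind such a row sits a fixture method mirroring every single column. Here is a sketch of what that binding typically looks like, shown as plain Java with the framework wiring omitted and AccountService as a hypothetical stand-in for the system under test:

// Hypothetical facade over the system under test.
class AccountService {
    void create(String name, String[] products, int billingDay,
                String[] features, String language) {
        // would call into the application here
    }
}

public class AccountFixture {
    private final AccountService service = new AccountService();

    // Every column of the wide table row becomes a parameter,
    // so every test table carries this implementation detail.
    public void createAccount(String name, String products, int billingDay,
                              String features, String language) {
        service.create(name, products.split(","), billingDay,
                       features.split(","), language);
    }
}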
If I can apply wishful thinking to my test data instead, I am able to write it down as briefly as possible. So, if I don’t need to care about the particular products and features sold, I may as well boil the above table down to this:
create account | myAccount |
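The fixture behind the brief row then owns all the defaults. A minimal sketch, reusing the hypothetical AccountService from above:

public class AccountFixture {
    private final AccountService service = new AccountService();

    // The table states only the intention; sensible defaults for all the
    // details the test does not care about live in one place, right here.
    public void createAccount(String name) {
        service.create(name,
                       new String[] { "defaultProduct" }, // any product will do
                       1,                                 // default billing day
                       new String[] {},                   // no optional features
                       "EN");                             // default language
    }
}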
In addition to this simplification, think about the implications a change to create account in the example above would have when I need to add a new parameter, for example the last billed amount for that account. If I have come up with six hundred test tables by the time this additional feature is introduced, I would have to change all six hundred of them. The time spent changing these six hundred tests will not be available for testing the application. Wasted – by my own fault earlier!
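With the brief table, by contrast, the new parameter touches exactly one place. Assuming the hypothetical AccountService above grows an additional argument, only the fixture method changes, while all six hundred tables stay untouched:

    // Only the binding changes; the test tables remain exactly as they are.
    public void createAccount(String name) {
        service.create(name,
                       new String[] { "defaultProduct" },
                       1,
                       new String[] {},
                       "EN",
                       0.0); // new parameter: default last billed amount
    }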
In the end, it boils down to this little sentence I used to describe this blog entry on Twitter:
When writing automated business-facing tests, start with the test data (the what), not the fixture to automate it (the how). ALWAYS!
I always think in terms of the user’s intentions. What is the minimum information a test can provide to express the user’s intentions? The user’s intentions are likely to form the most stable interface in terms of data in and out of a system. Those intentions can be implemented in many ways, but the intentions themselves stay the same.
I’m working with Concordion at the moment. I encourage our testers to write the Specifications first against the user’s intentions. This provides all the data in and out but doesn’t say anything about how that data will be provided to the system under test, or how verification data will be provided back to the Specification.
Once the specs are stable and have been reviewed, we then look at developing the code against the backdrop of these specs. Simply having these specs in place first really clarifies things without committing to code. It also really sharpens up any corner cases.
We then write the code and “bind” the Specification to a particular implementation via the test fixtures. This binding layer is really important for me. We can change the binding when the code changes, but the user’s intentions often remain unchanged.
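For illustration, a minimal sketch of such a binding; the spec uses Concordion’s standard concordion:set and concordion:assertEquals attributes, while CreateAccountFixture and the AccountService facade are hypothetical names:

// CreateAccount.html -- the Specification, written first, stating only intentions:
//
//   <html xmlns:concordion="http://www.concordion.org/2007/concordion">
//   <body>
//     <p>When I create an account named
//        <span concordion:set="#name">myAccount</span>,
//        that account <span concordion:assertEquals="createAccount(#name)">exists</span>.</p>
//   </body>
//   </html>

import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

// The binding layer: when the implementation changes, only this class changes.
@RunWith(ConcordionRunner.class)
public class CreateAccountFixture {
    private final AccountService service = new AccountService();

    public String createAccount(String name) {
        service.create(name); // defaults for everything the spec does not mention
        return service.exists(name) ? "exists" : "does not exist";
    }
}

// Hypothetical facade over the system under test.
class AccountService {
    private final java.util.Set<String> accounts = new java.util.HashSet<>();
    void create(String name) { accounts.add(name); }
    boolean exists(String name) { return accounts.contains(name); }
}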
Hi Neil,
those are very good points. Ideally I would write down the specifications without any particular framework in mind, and later on automate them with whatever crosses my path. I think Concordion initially had exactly this vision; Robot Framework is similar there.
What I find striking is the degree to which testers get used to dealing with unnecessary implementation details. When confronted with business-facing tests, my colleagues dive straight into the How of automating them, even before the What is in focus. Reframing the story or the test around the need of the end user can help achieve this focus.
Thanks for your hint.