Testing quality into the product
Some weeks ago James Bach began a series on "Quality is dead". Since he has not written up more of it yet, I got in touch with him via instant messaging. He explained to me that quality is dead; it cannot be brought back to life. How come?
Methodologies
Reactions to Tom DeMarco's article in IEEE
Here is a wrap-up of the blog entries and articles that were written in reaction to Tom DeMarco's article in the IEEE.
Misunderstood metrics
My Miagi-Do school mentor Matt Heusser posted a blog entry on metrics today. Since I did not quite see the problem with metrics, I contacted him to fulfill the Rule of Three Interpretations. In our conversation I realized that he was referring to a concept I have not encountered in my three years of working as a software tester.
Generally speaking, I was thinking of metrics as using FindBugs, PMD or code coverage tools on your software. For some time we have been using these for the framework that we grew on top of FIT for testing our software product. In combination with Continuous Integration you can see improvements and you can see where your project currently stands. Is it in good shape? Where might the holes be? Which code paths are not tested well enough? This feedback is essential for management to make informed decisions about the risks of delivering the product, based on your test results.
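Since the framework mentioned here grew on top of FIT, a minimal sketch of the kind of fixture such a framework builds on may help; the class and column names are invented for illustration, and the code assumes fit.jar on the classpath:

```java
import fit.ColumnFixture;

// Backs an HTML table whose columns are "price", "quantity" and "total()".
// FIT fills the public fields from the input columns of each row and
// compares the return value of total() against the expected output column.
public class OrderTotalFixture extends ColumnFixture {
    public double price;    // input column
    public int quantity;    // input column

    public double total() { // output column, checked by FIT
        return price * quantity;
    }
}
```

The metrics described above then run over exactly this kind of code: FindBugs and PMD on the fixture and framework classes, plus the coverage reports, all fed back through the Continuous Integration build.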
Matt, on the other hand, refers to metrics on a different level: if your annual reviews and your salary are decided based on management metrics, those metrics are evil. My struggle is that I have never worked in a system where my personal performance and salary were based on some metric. Still, I can think of situations where this would be evil. Here are a few:
- A software architect getting paid by the number of architectural pages written.
- A software developer getting paid by the number of lines of code.
- A software developer getting paid by the number of software products finished.
- A software tester getting paid by the number of tests executed/automated.
- A software tester getting paid by the number of bugs found.
Speaking as a software tester, I would use a tool to generate the test cases that are easy to automate (I have been down this rabbit hole, I just realized, though I was not paid based on it) or run a spell-checker on our logfiles (I always wanted to do this, but did not, because there are more severe problems than the correct spelling of some debug log messages). As a colleague likes to put it: when you measure something, you change the thing you are measuring. Be careful what you measure, because it just might improve. Once I understood these different meanings of metrics, I also understood the problem.
Some time later I found the origin of the discussion: a recent statement from Tom DeMarco reflecting on 40 years of software engineering and measurement. Take the time to read it. Here is the portion I found most interesting:
So, how do you manage a project without controlling it? Well, you manage the people and control the time and money. You say to your team leads, for example, “I have a finish date in mind, and I’m not even going to share it with you. When I come in one day and tell you the project will end in one week, you have to be ready to package up and deliver what you’ve got as the final product. Your job is to go about the project incrementally, adding pieces to the whole in the order of their relative value, and doing integration and documentation and acceptance testing incrementally as you go.”
I believe that this will work. At least it will keep the team from being micro-managed and over-measured.
Mindful readings about Software Craftsmanship
While looking through my personal backlog of blog entries, I found this one today. It contains a quote from Uncle Bob Martin, taken from one of his blog posts in April. Here is the quote:
I see software developers working together to create a discipline of craftsmanship, professionalism, and quality similar to the way that doctors, lawyers, architects, and many other professionals and artisans have done. I see a future where team velocities increase while development costs decrease because of the steadily increasing skill of the teams. I see a future where large software systems are engineered by relatively small teams of craftsmen, and are configured and customized by business people using DSLs tuned to their needs.
I see a future of Clean Code, Craftsmanship, Professionalism, and an overriding imperative for Code Quality.
The related article was named Crap Code Inevitable? Rumblings from ACCU. I remember that back then I wanted to quote that article as a mindful reading; after reading over it again, that item was still pending on my list.
First, the mention of doctors reminds me of a visit to my doctor in May. I had a problem raising my arm after having exercised too much. After I described the problem, my doctor told me to stand up, raise my arm this way, raise my arm that way, raise my arm in yet another way, and then he had identified the problem. What amazed me was the realization that this way of analyzing a problem in software is not nearly as efficient. On the one hand it took him no more than five minutes to find the cause; on the other hand I recognized his level of expertise. I doubt there was a course back at university where my doctor learned exactly this. Of course he learned how the muscles and fibers are connected with each other, but I clearly doubt that back in his university days there were practical courses where an injured patient with an arm problem like mine was questioned and examined in front of the students. Likewise, even though I never had a course on test-driven development, I can make the conscious decision to apply it and communicate my intentions to my colleagues. For this to work, I take my personal experience with TDD and simply do it. The same applies to acceptance test-driven development. Every day anew I can decide to give the best I can in order to delight my colleagues and my customers. Personally, I consider this an act of professionalism.
On the other hand, the quote above also reminds me of a problem Brian Marick recently raised on Twitter:
I detect a certain tendency for craftsmanship to become narcissistic, about the individual’s heroic journey toward mastery. People who think they’re on a hero’s journey tend to disregard the ordinary schmucks around them.
Heroic journeys are a problem. Here I am mostly referring to an insight from Elisabeth Hendrickson and to a piece of work that I think was by Alistair Cockburn, though I am no longer sure. The problem with our education system is that in school you are the one who fights through the exams on your own. At university it is your own work that gets graded. For PhDs this is even more dramatic (so I have been told; I have no personal experience with it). Then, when you get into your first job, you are asked to do team work. But where should you have learned that? The whole value system that worked all your life collapses. So what do you do about it? People, being "inconsistent creatures of habit", build walls around themselves to make their work safe against the rants of others. But here comes my reply to Brian's statement above: the Software Craftsmanship Manifesto states otherwise. Software Craftsmanship is about taking apprentices, teaching what you have learned and what has worked for you, and building a community of professionals for valuable exchange, just as the teams from Obtiva and 8thLight have shown us. This is our responsibility. This is professionalism in the sense of Software Craftsmanship, and it is among the things we value.
Testability vs. WTFs per minute
Lately two postings popped up in my feed reader regarding testability. While reading through Michael Bolton's Testability entry, I noticed that his list is a very good one on the subject. The problem with testability, from my perspective, is how little attention it seems to get. Over the last week I was inspecting some legacy code. Legacy is meant here in the sense Michael Feathers uses in his book "Working Effectively with Legacy Code": code without tests. Today I did a code review and was upset about the classes I had to inspect. After just five classes I was completely upset and gave up. In the design of the classes I saw large to huge methods, all entangled with each other, passing around instances of classes with no clear responsibility assigned to them, variables in places where one would not look for them, and so on. Since I am currently reading Clean Code from the ObjectMentors, this makes me really upset. Even after ten years of test-driven development there is a lack of understanding of that practice, and there is also a lack of understanding of testability. What is a class worth that talks to three hard-coded classes at construction time? How can one get this beast under test? Dependency injection techniques, design principles and the like were completely absent from these classes. Clearly, this code is not testable, at least 80% of it according to the code coverage analysis I ran after adding some basic unit tests where I could. Code lacking testability usually suffers from other problems as well. This is where Michael Bolton, James Bach and Bret Pettichord turn to heuristics and checklists; the refactoring world calls these smells.
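To make the construction-time problem concrete, here is a minimal sketch in Java; the class names are invented for illustration and are not taken from the code I reviewed:

```java
// Hypothetical collaborator, standing in for the hard-coded classes I found.
interface PriceFeed {
    double currentPrice(String item);
}

class LivePriceFeed implements PriceFeed {
    // In real life this would talk to a remote service.
    public double currentPrice(String item) {
        return 42.0;
    }
}

// Hard to test: the collaborator is created inside the class itself,
// so a unit test cannot replace it with a fake.
class QuoteService {
    private final PriceFeed feed = new LivePriceFeed();

    double quoteWithTax(String item) {
        return feed.currentPrice(item) * 1.19;
    }
}

// Testable: the collaborator is injected, so a test can pass in a stub.
class InjectedQuoteService {
    private final PriceFeed feed;

    InjectedQuoteService(PriceFeed feed) {
        this.feed = feed;
    }

    double quoteWithTax(String item) {
        return feed.currentPrice(item) * 1.19;
    }
}
```

A unit test can hand InjectedQuoteService an in-memory PriceFeed that returns a fixed price and verify the calculation in isolation; that is exactly the seam the classes I reviewed did not offer.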
On the Google Testing Blog there was an entry about a common problem I have also run into several times: Why are we embarrassed to admit that we don't know how to write tests? In my experience, project managers and developers think that testers know the answers to all the problems hiding in the software. We get asked "Can you test this?" and "When will you be finished?" without a clear understanding of what "tested" means or any insight into what we do most of the time. "Perfect Software – and other illusions about testing" is a book Jerry Weinberg published last year, which I still need to read through in order to know whether it is the right book to spread at my company (I think it is). If a developer does not know the latest, oldest or most widespread technology, it is not a problem at all. If a tester does not know how to "test" this piece of code, it is: he is blocking the project, making it impossible to deliver on schedule. Escalation! What Misko points out in his blog entry is that the real problem behind this, too, is testability:
Everyone is in search of some magic test framework, technology, the know-how, which will solve the testing woes. Well I have news for you: there is no such thing. The secret in tests is in writing testable code, not in knowing some magic on testing side. And it certainly is not in some company which will sell you some test automation framework. Let me make this super clear: The secret in testing is in writing testable-code! You need to go after your developers not your test-organization.
I’d like to print this out and hang it all over the place at work.
Take your time for improvement
Dave Hoover just raised an interesting point on personal improvement: spend 50% of your working time on personal improvement. I recognized myself in that blog post. For about two years now, since I was asked to take on leadership responsibility, I have been constantly reading books in my spare time. Over the last year I also started to read several mailing lists and blogs. I am thankful that the wife I married within the last year is patient with me (I wonder how long she will stay that way).
What strikes me is the fact that Dave states it takes about five years to see the return on that investment. Five years seems like a very long time. Reflecting on my life so far, I must say that I have already spent more than five years on improving personally. At the age of 15 I chose to lead a youth group at our local sports club. I have given swimming lessons in my spare time for the past thirteen years and grew into a leading role at my sports club over the past 15 years. In the meantime, while doing my diploma at university, I was involved in three clubs at a time. Through all of these I gained great experience in leadership and organisation. For over ten years I participated in and helped organise the helpers for the season openings of our local open-air bath. This was a great time and it helped me improve for my current job.
On the other hand, I also worked for money at a local store during my university years. There I had to organise the ordering process for soft drinks, juice and later even alcoholic drinks. I had to order according to market and seasonal demand and manage the surpluses. In the end there was a major rebuild of the store that I was highly engaged in. Over the years I learned how to work together with my colleagues and contribute to the critical work that is daily business at a local store.
All these experiences were a major part of the preparation for my current position. In the past I have worked with people, for people, and sometimes even against people when I was convinced of the opposite position. Taking some spare time now to support my journey towards mastery is just a small duty for me. I just took out my calculator and worked out the time Dave's proposal would leave me for my family: a week has 168 hours; take away six hours of sleep each night, two hours for getting to work and back, two hours a day for eating, 40 hours of work and 20 hours of personal improvement, and you are left with 42 hours to spend with your family and on leisure activities like sports. That seems OK to me, but do not blame me for my naive assessment of the situation. (My wife works in retail and usually does not come home until 9pm, so do not try to project my situation onto yours.) Take Dave's point into account and at least try it out for a while. Personally I count myself in the former group of people in his statement:
I think it would be silly to try to force yourself to do it, you’d end up burning out. Really, I’m talking to two groups of people. To those who are already spending their spare time on becoming a better developer, I want you to know that your efforts will pay off, but understand that it will take between 5-10 years to see the most significant benefits of your efforts. To those who want to become a great developer but hold themselves back out of fear of failure or hard work, I hope to inspire you to give it a shot.
and Dave made me feel good about it.
Grading, evaluation and good code applied to apprenticeships
Today I had the opportunity to talk about some of the skills I needed to learn while becoming a leader. The girlfriend of one of my brothers-in-law is a primary school teacher, and today I got into a discussion with her about grades in school. During the discussion I wondered where teachers learn how to grade a student. Since I had one in front of me, I took the opportunity to ask directly. She stated that there was no education or course on this during her time at university. In Germany, two years of traineeship are mandatory for new teachers; she said that even during that time she learned only a little about the grading process. So most of the grading is done on intuition. Interestingly, her description reminded me of the very first few months of my time as a group leader, during which I had to do a performance review for one of my colleagues. No one had taught me how to do it up to that point; I had only received one review of my own so far. There was no course back in university where I could have learned it. During the last ten to fifteen years I have been a swimming trainer and led the swimming department at our local sports club for some time, but clearly that was my only experience of leading people, and those situations are of a different nature, more like teacher to student. Even today it is still hard for me to reflect on whether I rely on the proper intuition. (The bad news is that most of your colleagues will not have the courage to inform you of your bad decisions.)
Right before starting to write this article, I noticed that there is a similar case for good versus bad code. What did I learn at university about code? What did I learn about bad code? About good code? Most of the insights I have today come from my experience at work. When I started about three years ago, we had an undesigned test automation framework. There were lots of script functions without clear intent. Whenever I needed to add something new, I had to go looking for the right place to do so. Additionally, I was never fully aware whether there already was a similar function that I should have extended, decomposed or refactored to meet my new requirement. It was a mess. Over the past year I was able to get rid of this grown framework and introduce and grow a new one.
There we focused on the good aspects of code, using test-driven development, refactoring and regular reflection while evolving our classes. This was a good practice, and we tried to avoid most of the deficiencies from our previous experience. Over the last year we had some contributions from a team that did not follow our initial approach. The result is that there are currently two ways to do things. When inspecting the code I can now see the difference between good code I wrote, bad code I wrote and very bad code that I permitted to be included into the code base. (There was a rise in the FindBugs remarks, a drop in code coverage and a rise in the crappyness trend after the commit; the bad code was visible.) The main problem is that my colleagues do not seem to care, which is where the broken windows rule comes in. There is much work left to do, and I know that I will have to take responsibility for cleaning up this mess.
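As far as I recall, that crappyness trend is based on the CRAP metric published by Alberto Savoia and Bob Evans, which combines cyclomatic complexity with test coverage roughly as follows; this is a sketch of the published formula, not the implementation of any particular tool:

```java
// CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
// comp(m): cyclomatic complexity of method m
// cov(m):  fraction of m covered by automated tests, between 0.0 and 1.0
static double crap(int complexity, double coverage) {
    double uncovered = 1.0 - coverage;
    return complexity * complexity * Math.pow(uncovered, 3) + complexity;
}
```

A fully covered method scores only its complexity, while an untested one scores its complexity squared plus its complexity, which is why badly covered, complex code shows up so clearly in such a trend.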
The striking point for me is that there is no education on the things that matter most in day-to-day work. You are not taught how to grade your students; you become able to do so through experience, intuition, and your own gut feeling and hunches. Similarly, there is no education on how to tell a good worker from a not-so-good one; you have to trust yourself here. The same goes for the cleanliness of code. Despite the books I could mention (including the one with exactly that name), there is no preparation at university for clean code. Similarly, there is little to no education on good tests versus not-so-good tests. From my perspective, a major challenge for craftsmanship, be it software craftsmanship or even testing craftsmanship, is to teach these differences in an apprenticeship. This is our profession and our job for the years to come.
Testing in the days of Software Craftsmanship
Matt Heusser came up with the idea of the Boutique Tester. While reading through his article, I wondered about the same striking question that has been with me since the early days of the Software Craftsmanship movement: where will testers be in the craftsman world?
Some months ago I started a series which I called Testing focus of Software Craftsmanship, covering Values, Principles I and Principles II; in addition I had planned an entry on Principles III that I have not concluded so far. Most of the material in it I took from Elisabeth Hendrickson, Lisa Crispin, Janet Gregory, Brian Marick, James Bach, and so on. In this series I tried to build an understanding of the tester's role in the craft world while simply restating the lessons most Agile teams have already learned. The reason I have not yet concluded the series is that I came to the realization that, for me, there seems to be no clear understanding of the sticky-minded in the craftsman worldview.
Reflecting on the lessons I learned from Agile methodologies like Extreme Programming, Scrum, Crystal or Lean, I remember being a bit jealous. Programmers were taught to use test-driven development, pair programming, big visible charts, planning poker. To me they seemed to have most of the fun. The tester simply does testing, the stuff they have always done. As long as they do not get in the way, everything is wonderful... It struck me at first, but when I realized that there are definitions of Exploratory Testing, Pair Testing, Retrospectives and the like, I became aware that testers have fun activities, too.
A while back I proposed renaming the Agile testing school to the example-driven school. After getting some feedback from peers, I realized that there seems to be no room for school mindsets in the Agile world, and little care about naming in the other schools. My initial motivation to rename the Agile school was the realization that the Agile view on testing would be a good one for craftsmanship as well. While my reasoning was based on poor assumptions, I started to realize there was a bigger argument behind it.
The bigger argument behind all three situations described above is, from my point of view, that software testers defined their craft long, long ago. Jerry Weinberg wrote in December about the way unit testing was done in the old days. Over the years several thought leaders for software testing have appeared. (I refuse to name any here, since I know I have been in the business for too short a time to honor the right ones.) Over and over, the craft of software testing has been discussed, defined and rediscovered, as the latest approaches to Agile testing teach us. Just a few months back there was a heavy discussion on why Agile testers are simply redefining terms that context-driven testers have been teaching for decades.
What, then, is the difference between software craftspersons and testing craftspersons? Testers know their craft from decades of experience and definition. There are more than enough books out there that teach good software testing as well as not-so-good techniques. Software testers have most of their set of tools together and can start leading their craft. For the development world this is not as true. Over the decades programming languages and techniques have risen and fallen: assembler, structured programming languages, logic programming languages, object-oriented programming languages, functional languages. The craft of software development seems more fluid to me than the craft of software testing. You can apply nearly all the testing techniques found in the books to your software, even to real-world products such as an Easy Button.
My point is that Matt raises a good point in his Boutique Tester article. Testers can go out directly and offer their testing services, gaining experience over the years in order to improve. Maybe they can also take apprentices and teach the lessons they have learned. I assume his proposal could work, though I am unsure whether I would rather view myself as a tester-developer or as a developer-tester. In the end, I think what matters most is that there is only us: testers and developers form a kind of symbiosis, just as automated and manual testing do. It is a good thing that we are not all generalists, but we should keep our minds open to different views of the problem at hand in order to compose a better solution to it.
Three stone cutters
Ivan Sanchez put up a blog entry on a parable, the three stone cutters. He challenged my thinking with the following question:
Sometimes I have this feeling that we as professionals are frequently trying to be more like the [“I’m a great stone cutter. I can use all my techniques to produce the best shaped stone“] stone cutter than the [“I’m a stone cutter and I’m building a cathedral”] one. Am I the only one?
Because of the analogy I felt the urge to reply to this question with Elisabeth Hendrickson's Lost in Translation and Gojko Adzic's Bridging the Communication Gap. Most of the stone cutters I meet in daily business think they are creating a cathedral while building a Tower of Babel. The biggest obstacle to efficient project success lies, from my point of view, in the communication among team members.
Alistair Cockburn identified this principle in his research on successful projects in the 90s and early 2000s; see for example Methodology Space or Software Development as a Cooperative Game (warning: don't get lost on his wiki site, as I did several times). Software development is done for people, by people. Since people are strong at communicating and looking around, software development as a whole should strengthen this particular facet of ours. A cathedral is of no use if the entrance is at the top of the highest tower. Yet in day-to-day work I see software developers who lack early feedback and do not see the problem.
The industry I work in has a high rate of changing requirements. In the mobile phone industry there really are moves from competitors that you have to react to in order to stay in business. For me this means reducing the time to market for the product we are building as much as is reasonable from a quality perspective. It also means communicating with our customers spread all over the world in different time zones, even within distributed teams. I therefore need to find technical and motivational ways to bring together people who do not see each other in the office on a day-to-day basis. Just last week we held a lessons-learned workshop on one of our products, where our colleagues in Malaysia used their webcam for the first time and showed us what the world looked like outside their office. (It was during a refreshment break, so don't get worked up about wasted resources, my dear boss.) This added a personal touch and lowered the personal distance between the two teams, at least during that session.
Anyway, I wonder why such moments happen so rarely. Instead I see teams (or rather, groups of people) working without communicating with each other, even though they need information from the co-worker sitting right next to them, and wondering in the end why there is an integration problem. Another point is the problem of open interfaces (lack of communication about protocols and the like), open scope (lack of communication about the content of the project), open everything. Clearly, to me it seems there is no such thing as communicating too much; it is the opposite that worries me.
My Renaming attempt
After a discussion on the Agile-Testing mailing list, I decided to give up the proposal to rename the Agile School of Testing. Erik Petersen put it in a way I fully agree with, so I decided to quote him here:
The schools as defined by Brett in soundbites are:
- Analytic School: sees testing as rigorous and technical with many proponents in academia
- Standard School: sees testing as a way to measure progress with emphasis on cost and repeatable standards
- Quality School: emphasizes process, policing developers and acting as the gatekeeper
- Context-Driven School: emphasizes people, seeking bugs that stakeholders care about
- Agile School: uses testing to prove that development is complete; emphasizes automated testing
I see no evidence in those descriptions that the Agile school has a monopoly on examples. All of these schools choose examples to demonstrate that a system appears to deliver their interpretation of functionality at a point in time, with differing degrees of attention to context and risk. I believe the Schools idea was originally intended to describe groups who tended to favor their ideas (dogma?) over others and focused mainly on functional testing, and when the original Agile School was named, it claimed to be replacing all the other schools. This has since changed considerably, and with new techniques such as Mocking and Dependency Injection and a focus on refactoring (CRAP metric anyone?) I would argue that Agile is much more about design and development aimed at simplicity (YAGNI), of which automated testing is only a part, rather than a specific School of (functional) testing. As I have said before, schools tend to manifest themselves in organizational culture and IMHO are relevant for discussion purposes only. Testing can involve many ideas, some of which are typically associated with schools, and depending on context and risks, testing can draw from all of them. My 2 cents.
Part of the problem is what the earliest schools have become, not what they started as. The original articles on waterfall in the early 1970s stated that just going from dev to test in one step never worked and needed to iterate between the two, but that got lost. In the mid 70s, Glenford Myers in amongst all his “axioms of testing” said that tests need to be written before being executed (because computers were million dollar machines and time was money, no longer such an issue) but he also said stop and replan after some execution to focus on bug clusters, and that also got lost. We need to be open to new ideas and weigh them against our current ones, based on their value and not the perceived baggage they bring from a particular school . Enough of the examples! [grin]
So in a sentence, I agree with Lisa’s posts and Markus’s later post about combinations of techniques (quote “Thinking in black and white when thinking of the schools of testing leads to misconception”), but Markus please ditch the Agile school rename attempt!
cheers,
Erik
Besides the other very good comments on this topic on the list, the idea of schools of testing simply does not matter to Agile testers. Here is an excerpt from the discussion I had with James Bach on the topic:
When we speak of schools in my community, we are speaking of paradigms in the Kuhnian sense of worldviews that cannot be reconciled with each other.
Basically, context-driven thinking helps Agile testers as well, but they do not adopt a Kuhnian sense of worldviews towards testing. Mainly I am wondering whether there is such a thing as an Agile school at all. Bret Pettichord felt there was, but currently I am not convinced of it. I am glad that I learned a big lesson from great contributors through my renaming approach, and I am finally ditching the attempt.