Stuffy Stand-ups

Here at MyCurrentEmployerCo, we have many problems with our adoption of Agile. (Let’s face it, if we didn’t, they’d not have hired me, so I shouldn’t complain too much…)

Amongst our many problems, there’s one that I’m finding particularly frustrating right now, so it’s time to blog about it. At the very least, the act of describing it ought to serve as some form of therapy for me, regardless of whether or not any constructive suggestions appear in the comments below.

At the majority of our stand-ups, several of the team will have a conversation along the following lines:

Team member: “Yesterday I worked on stuff. Today, I’m continuing to work on stuff. No blockers.”

Scrum master: “Which of your tasks have you completed?”

TM: “Hmm, none of them are finished, they’re all interdependent.”

SM: “That means we have no way to tell if you’re going to finish your committed work by the end of the sprint.”

TM: “That’s accurate, yes.”

[Fade to black]

[Sound effect: Chainsaw starting up]

The fading to black and the chainsaw might only happen in my imagination, but they definitely help.

There are a number of cardinal sins here, including:

The update doesn’t tell anyone what’s actually going on.

This could well be because the team member concerned doesn’t actually know what needs to be done, but doesn’t want to admit it. It could also be deliberate evasion to cover up some recent (or planned) slacking off. Without the peer-review of the details that the daily stand-up brings with it, no Scrum Master (or other stakeholder) stands a chance of understanding the true state of progress.

Tasks have not been broken down so that they can be individually completed.

The work involved hasn’t been thought through sufficiently before the start of the Sprint. Little or no thought has gone into the task breakdown; something has simply been cobbled together because the Scrum Master insists on having task breakdowns by the end of Sprint Planning Day, and the list conveniently adds up to about the same number of hours as the stated availability for the sprint.

There is a lack of respect for, and sense of membership of, the team.

To behave in this manner is supremely arrogant: “I’m working on Hard Things™, don’t worry your pretty little heads about the details because I’m so clever and wonderful that it will all be fine. There, there, there…” These team members see no value in sharing what they’re doing. In some cases, they see sharing the details of their work as a threat: “If I explain the details to these simpletons, they’ll realise that this work could easily be done in <low-cost location of your choice> and I’ll be out of a job.”

Each of these issues needs to be addressed, but each stems from underlying problems, some of which go back decades.

I love my job.

Game, Set and Match

Fixing Test Track Tennis was done by agreeing a few simple rules with both the development and testing teams. The rules, which I’ll spell out shortly, were only part of the solution, though. A much bigger contributor to the death of test-track tennis came as a side-effect of following them: developers and testers spent a lot more time talking to each other about both the testing and development processes, and over a small number of months the atmosphere changed from one of antagonism between test and development to one of collaboration.

The rules were:

  1. There’s no such thing as a “No Repro”.
  2. Once you’ve fixed your bit of code, test to the end of the script.

We removed “No Repro” as a defect resolution option in the system. Any developer who couldn’t quickly reproduce a defect assigned to him was to get off his backside immediately, walk to the tester’s desk and ask the tester to demonstrate the repro case. Occasionally a tester would be less than spectacular at describing repro cases; those defects would get referred to me (as head of development) so that I could have a conversation with the head of testing and the relevant tester could get some coaching.

Defects that only occurred intermittently would be flagged as “intermittent” in the system, and both development and testing teams would keep an eye out for other factors that made the defect more or less likely to occur. The defect would sit with the developer until a solid repro case was found, and would be re-checked every time nearby code was edited until things became clearer.

Testers would include the details of their current test script in the defect record, so that the fixing developer could test it right to the end. If he found another bug in his own code, obviously he’d fix that too. However, if he found a bug in someone else’s code, he’d give them the defect to fix, and they’d give it back to him to complete the test. The developer that fixed the first bug in the defect OWNED that defect, until the test script ran right to the end without errors.
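The ownership rule above can be sketched as a toy model. This is my own illustration, not anything from the original system; the class and function names (`Defect`, `run_script`) and the example steps and developer names are all hypothetical:

```python
# Toy model of the "defect ownership" rule: the developer who fixes the
# FIRST bug in a defect owns it until the tester's full script runs clean,
# even when other developers fix bugs found further down the script.

class Defect:
    def __init__(self, script_steps):
        self.script_steps = script_steps  # ordered test steps from the tester
        self.owner = None                 # developer who fixed the first bug

    def assign_first_fixer(self, developer):
        # Ownership is set once and never transfers.
        if self.owner is None:
            self.owner = developer

def run_script(defect, broken_steps, fixers):
    """Walk the whole script; each failing step is fixed by the developer
    responsible for that code, but the defect stays with its first fixer."""
    for step in defect.script_steps:
        if step in broken_steps:
            developer = fixers[step]       # whoever owns that bit of code
            defect.assign_first_fixer(developer)
            broken_steps.discard(step)     # bug fixed; retest continues
    return defect.owner                    # the owner closes the defect

d = Defect(["login", "create order", "print invoice"])
owner = run_script(d, {"create order", "print invoice"},
                   {"create order": "alice", "print invoice": "bob"})
# owner == "alice": she fixed the first bug, so the defect is hers until
# the whole script passes, even though bob fixed the later one.
```

The point of the rule, as I read it, is exactly what the single `owner` field captures: there is always one named person on the hook for getting the tester’s script to run end-to-end.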

This scheme took the average time-to-fix for defects down from six releases to one release. Additionally, it reduced the incidence of “Not Fixed” defects from about 35% down to less than 5%. Maybe not as low as we’d have liked, but low enough that I could take a personal interest in each one that came back like that, which then fed into the coaching and personal development delivered to our developers.

Quality went up. Morale went up. Productivity went up.

This was my first “win” as a newbie development manager, which is probably why it’s stuck with me for so long. Feel free to comment on your first win (or loss), or on ways to improve this strategy the next time round.

Test Track Tennis

I worked at a software house for several years. The owner-manager didn’t really believe in testers, so when he was finally persuaded to hire some, he stuck them out of the way in a distant part of the building, a long way away from the developers. His resistance stemmed from his often-stated belief that developers should test their own work, so presumably the idea of having the testers out of sight was intended to have the developers forget they were there and thus be that much more robust in their self-testing.

For context, this was a VB6-based application with an SOA-style set of back-end services written in C++. There were about half a million lines each of VB6 and C++, and while there were only fifteen or so server components, there were over four hundred client-side VB6 binaries, roughly half of which belonged to the application in question while the other half were common across several applications. (At some point, I’ll write about some of the fun and games we had with COM in that environment, but not today…)

We used a system called “Test Track” for defect tracking; other defect tracking systems are available. (I wouldn’t name it, except that it alliterates so nicely in the title.)

Testers, sitting in their testing silo, would raise a bug, including a few steps of their test script to help a developer reproduce the issue. The bug would be assigned to the lead developer, who would farm it out to whoever was responsible for the bit of code that was blowing up. Said developer would look at it, have one half-hearted go at reproducing it and then send it back to the tester marked as “No Repro”.

A week later, the tester would get the next build of the application and check that the bug was ready for closing, only to find that the reproduction steps still produced the error, and the bug would wing its way back to the developer for another try. This rally could go on for several release cycles before developer and tester actually talked to each other about the bug. Obviously, once actual communication happened, the developer would understand that there was a bug and promptly fix it, sending it back as “fixed”, happy in the knowledge that his work was done.

The tester, upon receiving the fixed version, would diligently run through the steps that were being tested when the original defect was raised. Sadly, in the majority of cases the result was that a new error would be found further down the test script, the bug would be re-opened with new error details and sent back to the developer for further work.

The average fix time for defects was between five and six weekly releases.
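A quick back-of-envelope sketch (my own arithmetic, not anything we measured formally at the time) shows how the rally adds up, assuming each bounce of the ball costs one weekly release:

```python
# Time-to-fix measured in weekly releases, under the assumption that
# every round trip between developer and tester costs one release:
# each "No Repro" bounce, each reopen (a new error further down the
# test script), plus one release for the final successful fix.
def releases_to_close(no_repro_rounds, reopen_rounds):
    return no_repro_rounds + reopen_rounds + 1

# e.g. three half-hearted "No Repro" bounces plus two reopens:
releases_to_close(3, 2)  # -> 6 releases
```

Three wasted “No Repro” bounces and a couple of reopens is all it takes to land on the five-to-six-release average.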

Next time, I’ll talk about what we did to fix this mess, but feel free to comment on what you might do in this situation.

Console.WriteLine("Hello, World!");

This blog is intended to help me to recapture my sanity during this interim period between having been made redundant and getting a new job.

I intend to use it to capture some of the more interesting ways in which I’ve seen development projects get broken, and sometimes some of the ways in which they’ve been fixed.

I have a background in software development and development management. While my last job moved me further and further away from the code, the process of writing posts here is intended to help me get back to the stuff I really enjoy: helping development teams deliver their best stuff.

I’m a fan of Scrum and Agile in general, and a certified Scrum Master. I’ve done more VB6 (and knew more about the inner workings of COM) than I care to admit these days, and have a more than passing knowledge of “old fashioned” C#, though I’ll be the first to admit that MVC 3, LINQ and some of the other newer additions to the C#/.NET world are entirely new to me.

I’ve done “small software house” and “internal development team in huge enterprise” and am now (#shamelessplug) looking for work in an agile-friendly organisation where the technology is at the heart of what they do, not a back-room activity.

I’m really curious about how people find their way to this blog, so feel free to leave comments and make yourselves at home.