“What tool should we use?”
How many times have we heard those words? When a group of developers is presented with that question, most will scurry off and start searching the web for information. Some of those developers will enter a spin cycle and never report back a result. The rest will come back with an answer and say, “Here’s what we should use and why.”
Fast-forward about a year. One or more of the following has happened:
- The project was successful. Everyone’s happy. The team moves on to the next project.
- The project was unsuccessful (whether or not the tool was at “fault”).
- Some key developers leave the project. New developers are brought on who weren’t part of the original decision process.
Now the questions are: were those original “whys” valid for the problem at hand, and how do they affect projects moving forward?
In most environments the answer is easy: “we don’t know.” Most teams don’t record the decisions they made, and they don’t have a way to measure success at any level (save, perhaps, for the project itself).
Enter the rubric.
When the “What tool should we use?” question is first asked, do this: spend an hour or so with the group, define the major business and technical requirements as they are understood at the time, and write up a rough rubric — a set of criteria and a scale for determining the relative importance of those criteria. Agile teams should be able to plow through this requirements definition using one of the standard planning methods.
The rubric ensures that each tool is evaluated in a consistent manner and that the whole group is pointed in the same direction. You may have to revise the rubric as you go along to address new concerns. (Make sure that all tool candidates are rescored against the revised rubric.) Once a decision is made, you can be assured that the reasons for choosing that tool are known and understood.
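To make the mechanics concrete, here is a minimal sketch of a rubric as weighted criteria in Python. The criteria names, weights, and candidate scores below are all hypothetical, invented for illustration; a real team would fill these in during the requirements session.

```python
# Hypothetical rubric: each criterion carries a weight reflecting its
# relative importance as agreed on by the team.
RUBRIC = {
    "time between releases": 3,
    "relative stability and maturity": 5,
    "number of outstanding bugs": 4,
    "documentation quality": 2,
}

def score_tool(ratings):
    """Weighted total for one candidate.

    `ratings` maps each rubric criterion to a score on a shared
    scale (here 1-5), so every tool is judged the same way.
    """
    return sum(RUBRIC[criterion] * rating for criterion, rating in ratings.items())

# Hypothetical scores for two candidate tools, rated 1-5 per criterion.
candidates = {
    "Tool A": {
        "time between releases": 4,
        "relative stability and maturity": 3,
        "number of outstanding bugs": 2,
        "documentation quality": 5,
    },
    "Tool B": {
        "time between releases": 2,
        "relative stability and maturity": 5,
        "number of outstanding bugs": 4,
        "documentation quality": 3,
    },
}

for name, ratings in candidates.items():
    print(name, score_tool(ratings))
```

The scores themselves matter less than the artifact: the rubric records which criteria were considered, how heavily each was weighted, and how each candidate fared — exactly the information that is lost when “Joe just picked it.”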
Will a rubric ensure that you never make a bad decision? Certainly not. So why do it? Let’s examine a case where a rubric is not used:
A tool is chosen by the standard means (i.e., someone just picked it). The project fails. During the lessons-learned meeting, the following exchange is heard:
Manager: “Why did we choose this tool?”
Developer: “I don’t know. Joe chose it.”
Manager: “Where’s Joe?”
Developer: “He left the project a few months back.”
In many environments the blame will be placed solely on Joe. Without some record of what was originally expected of the tool, how can the success of the tool be rated? More importantly, how can you be sure that the same mistakes won’t be made on the next project?
Developer: “We’re not going to use that same tool that we used on the previous project!”
Manager: “Why?”
Developer: “Don’t you remember? That project crashed and burned!”
Manager: “You’re right!”
It’s easy to confuse cause and effect when a project fails. The tool may have been the reason the project failed, or it may have been one of the few things that worked well. But since the tools were chosen in an ad hoc manner, the root cause of the project’s success or failure may never be known.
Had a rubric been used in the case above, the team could go back and look at the requirements for the tool and how those requirements were weighted. Say the project failed because the tool had a number of critical bugs that were not patched in a timely manner. Looking back at the rubric, one could check how the product was rated in categories such as “time between releases,” “relative stability and maturity,” and “number of outstanding bugs.” If none of those qualities were investigated, then the team knows it needs to examine and rate them appropriately on the next project.
Through the proper use of a rubric, the consistency and quality of projects can be systematically improved.