Wednesday, April 15, 2009

Improving Requirements with Lean Six Sigma Tools

Lean Six Sigma (LSS), more than anything else, is about Managing by Fact. Every organization can select elements of the LSS approach without necessarily taking on a full implementation. This post considers one common scenario in software implementations and describes how selected elements from LSS can be adapted to improve outcomes.


Scenario:


“We do an enhancement prioritization process with our customers during our annual planning cycle, but somehow it just doesn’t seem to work very well. We end up with a bunch of stuff that doesn't seem to have any sort of overall theme – almost features without a rationale. We need another way to work this!”

Anchoring Requirements in Business Outcomes


One common misconception about Lean Six Sigma (LSS) is that it’s all about statistics – in reality it’s much more than that. One of its tools (which can be applied independently of the method as a whole) involves disciplined use of language data as well as numbers. Requirements, after all, are mostly described in language, and typically not very precise language at that. As in this scenario, we often end up with a laundry list of features and functions whose coherence and central themes are unclear, even to the people who gave us the requirements. We've all had the experience of getting halfway through a project and realizing that both the development team and the customer are wondering, "now why is it we're doing this particular feature?"

When a financially oriented, fact-based thought process is applied to requirements, the focus is quite different from the typical “what are your requirements?” approach. Instead the focus is on understanding the customer's "Critical to Quality" (CTQ) business objectives – developing a rich understanding of what the customer is trying to accomplish and how value will actually be generated. At first glance this may seem a fine distinction, but in practice it leads to a very different mindset that creates very different outcomes. Implications of this mindset include:


Desired business outcomes precede features and functions.
A fact-based approach focuses first on the business results, described in financial terms, that are the reason a system is being developed or enhanced. Certainly most projects begin with some sort of high-level statement of business objectives that justifies initiating the project, but that focus is often lost once the team starts to “find out what they want” – by the time the project is a couple of months old, few remember the initial rationale. Most projects quickly lose sight of “why are we doing this?” and "how does this feature/function contribute to realizing the expected business value?" – losing the connection between what was asked for and how satisfying those wishes will actually produce business value.


An accounts receivable system, to offer a simple example, fundamentally has only one reason to exist: to facilitate collection of money owed to the organization. Even in such a simple case, systems are very often built with dozens of different ways to enter transactions or view amounts outstanding, reflecting the individual preferences of collections and accounting personnel in the various divisions and regions of the organization. Perhaps many of these units were acquired over a period of time. Perhaps they all used different systems and different business processes – some used Oracle, some SAP, some QuickBooks. They all want their reports and screens exactly the way they are accustomed to seeing them – and as a consequence the implementation team builds far more software than is fundamentally necessary, creates many versions of the training materials, and provides help desk support for all the variants. A very large part of this extra effort has essentially no actual business value.

Impact-based selection of the functionality to be delivered.
Instead of a "popularity contest" that relies on some sort of voting scheme and/or on the political or financial clout of certain stakeholders, a more fact-based approach will produce better results. A scorecard based on an adaptation of the Pugh method, tailored to the circumstances, can provide a formal mechanism that facilitates objective evaluation (a minimal sketch of such a scorecard appears after the list below). Proposed features and functions can be rated against an agreed set of CTQ attributes that reflect not only the business outcomes but also important non-functional attributes of a solution that meets all "well-founded" customer requirements (as distinguished from wishes and matters of taste or style). Attributes rated for each proposed feature/function might include some of the following:
  • The contribution it makes to financially measured outcomes – if we add this feature will our collections improve?
  • The contribution it makes to the cost of operating the system – if we add this feature, will it reduce our operating cost or cost of ownership?
  • The contribution it makes to the efficiency of the personnel using the system – if we add this feature, will it reduce the time it takes to enter transactions? The time it takes to perform a collection activity?
  • The contribution it makes to deployment of the system – if we add this feature, will it reduce training time? Reduce development of training materials?
  • The contribution it makes to system reliability – does it make the system more foolproof? Is the cost of the feature consistent with the associated failure risk?
  • The contribution it makes to security – does the feature make the system less vulnerable? Is the cost of the feature consistent with the associated risk?
  • What portion of system users will use the feature – is the size of the impacted user base consistent with the development cost?
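
To make the mechanics concrete, here is a minimal sketch in Python of the kind of weighted scorecard described above. The criteria, weights, ratings, and feature names are all hypothetical, invented for illustration; a classic Pugh matrix compares candidates against a datum concept with +/0/- ratings, and the weighted numeric scale shown here is just one common adaptation.

```python
# Illustrative weighted scorecard for candidate features (an adaptation of the
# Pugh method): each feature is rated 1-5 against each CTQ criterion, the
# ratings are weighted, and the weighted scores are summed.
# All names, weights, and ratings below are hypothetical examples.

CRITERIA_WEIGHTS = {
    "collections_impact": 0.30,   # contribution to financially measured outcomes
    "operating_cost": 0.20,       # reduction in operating cost / cost of ownership
    "user_efficiency": 0.20,      # faster transaction entry or collection activity
    "deployment_effort": 0.10,    # training time, training materials
    "reliability": 0.10,          # foolproofing vs. failure risk
    "security": 0.10,             # reduced vulnerability vs. risk
}

# Ratings on a 1 (little contribution) to 5 (large contribution) scale.
candidate_features = {
    "automated dunning letters": {
        "collections_impact": 5, "operating_cost": 3, "user_efficiency": 4,
        "deployment_effort": 3, "reliability": 3, "security": 3,
    },
    "custom regional AR screens": {
        "collections_impact": 1, "operating_cost": 2, "user_efficiency": 3,
        "deployment_effort": 1, "reliability": 3, "security": 3,
    },
}

def weighted_score(ratings: dict) -> float:
    """Sum of weight * rating across all CTQ criteria."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

for name, ratings in sorted(candidate_features.items(),
                            key=lambda kv: weighted_score(kv[1]),
                            reverse=True):
    print(f"{name}: {weighted_score(ratings):.2f}")
```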

Questions such as these (and certainly there could be many others relevant to a particular situation) can be an effective screen that keeps low-value items from gumming up the works. When ratings have been assigned to the proposed features and costs have been estimated for each, it is a relatively simple matter to use the resulting scores as the basis for decisions on what to include in light of the available budget. A software firm I know applied this approach to a major new release – when they presented their approach and results, they received a standing ovation from the user group members who had participated in identifying the potential features/functions and in the ratings process. They saw a significant increase in upgrade revenues, and for the first time in years the internal friction between development and marketing was reduced to a low boil.
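
Once each candidate has a score and a cost estimate, the selection step can be as simple as the sketch below: rank by score per unit of cost and fill the available budget. This greedy ranking is only one reasonable heuristic (a strict optimum would call for a knapsack-style optimization), and the feature names, scores, costs, and budget are purely illustrative.

```python
# Hypothetical (feature, weighted CTQ score, estimated cost) triples.
candidates = [
    ("automated dunning letters",  4.1, 120_000),
    ("aging dashboard",            3.6,  60_000),
    ("custom regional AR screens", 2.0, 150_000),
]

BUDGET = 200_000  # illustrative planning budget

def select_within_budget(items, budget):
    """Greedy selection by score-to-cost ratio until the budget is exhausted."""
    chosen, remaining = [], budget
    for name, score, cost in sorted(items, key=lambda x: x[1] / x[2], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen, budget - remaining

selected, spent = select_within_budget(candidates, BUDGET)
print(f"Selected: {selected}  (estimated cost: {spent})")
```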

Is this rocket science? Of course not! Did we need advanced statistics? No way! What was needed, and what some of the LSS tools supplied, was a disciplined, fact-based process that was visible, understandable, and defensible. Certainly there was room for argument on the ratings, and many arguments occurred, but in the end, everyone involved understood how and why the decisions were reached. Internal and external alignment was better than it had been in years.

Tuesday, April 7, 2009

Is Agile "Fragile"?

While I'm not intending to be unduly controversial (well, maybe a little), I have noticed more and more commentary recently expressing various concerns about a current "hot topic" - Agile methods. One example is a recent article by James Shore, "The Decline and Fall of Agile".

In that article he remarks "It's odd to talk about the decline and fall of the agile movement, especially now that it's so popular, but I actually think the agile movement has been in decline for several years now. I've seen a shift in my business over the last few years. In the beginning, people would call me to help them introduce Agile, and I would sell them a complete package that included agile planning, cross-functional teams, and agile engineering practices. Now many people who call me already have Agile in place (they say), but they're struggling. They're having trouble meeting their iteration commitments, they're experiencing a lot of technical debt, and testing takes too long."

Personally I don't doubt there are many potential benefits to Agile methods, provided they are actually used as intended and are appropriate to the context in which they are applied. Sadly, like many other good ideas, Agile is often more "talk the talk" than "walk the talk". Some of Agile's more rabid advocates seem to think it's a "universal solvent", which even alchemists and sorcerers don't believe in any more - nothing turns lead into gold.

On the other hand, I do have some fundamental concerns about the evident lack of hard facts and data - there seems to be a lot of heat, but not much light. Are Agile methods actually more productive in aggregate, across a series of iterations, than alternative methods? As Shore points out, "technical debt" can easily become a major problem. To some extent, short iterations inherently put architectural soundness at risk. Of course Agilists advocate "refactoring" to remedy that risk, but how often is refactoring actually done? What does it actually cost? After 10 or so iterations, is Agile really, in total, more productive than the alternatives?

And what about test-driven design/development (TDD)? What does it cost compared to Fagan-style inspections? What are the actual defect containment rates? Capers Jones's data and other sources clearly show that Fagan inspections find 60-80% of the defects present, while testing finds 30-50% (per test type). The facts we do have call into question some of the claims made for TDD.
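
As a rough back-of-the-envelope check, the sketch below chains several test activities and compares the result with a single inspection. The removal rates are simply midpoints of the ranges cited above, and the independence assumption is a simplification - it is illustrative arithmetic, not measured data.

```python
# Cumulative defect removal when several activities are chained, assuming each
# activity independently removes a fixed fraction of the defects still present.
# The rates below are midpoints of the ranges cited above (illustrative only).

def cumulative_removal(rates):
    """Fraction of the original defects removed after applying each activity in turn."""
    remaining = 1.0
    for r in rates:
        remaining *= (1.0 - r)
    return 1.0 - remaining

print(f"One Fagan inspection at 70%:  {cumulative_removal([0.70]):.0%}")
print(f"Two test types at 40% each:   {cumulative_removal([0.40, 0.40]):.0%}")
print(f"Three test types at 40% each: {cumulative_removal([0.40] * 3):.0%}")
```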

In fairness, my comments about the lack of facts and data are by no means restricted to Agile - they apply to a great many fads du jour. Let's hope one day soon we'll begin to do rigorous, data-based assessments. Actually, a few of us are working behind the scenes to bring that to fruition - more about that as it develops!

If you have any facts and data (vs. anecdotes) that may shed light on this topic, please share!