Wednesday, April 15, 2009

Improving Requirements with Lean Six Sigma Tools

Lean Six Sigma (LSS), more than anything else, is about Managing by Fact. Every organization can select elements of the LSS approach without necessarily taking on a full implementation. This post considers one common scenario in software implementations and describes how selected elements from LSS can be adapted to improve outcomes.


Scenario:


“We do an enhancement prioritization process with our customers during our annual planning cycle, but somehow it just doesn’t seem to work very well. We end up with a bunch of stuff that doesn't seem to have any sort of overall theme – almost features without a rationale. We need another way to work this!”

Anchoring Requirements in Business Outcomes


One of the common misconceptions concerning Lean Six Sigma (LSS) is that it’s all about statistics – in reality it’s much more than that. One of its tools (which can be applied independently of the full method) involves disciplined use of language data as well as numbers. Requirements, after all, are mostly described in language, and typically not very precise language at that. We often, as in this case example, end up with a laundry list of features and functions whose coherence and central themes are unclear, even to those who gave us the requirements. We've all had the experience of getting halfway through a project and realizing that both the development team and the customer are wondering "now why is it we're doing this particular feature?"

When a financially oriented, fact-based thought process is applied to requirements, the focus is often quite different from the typical “what are your requirements?” approach. Instead the focus is on understanding the customer's "Critical to Quality" (CTQ) business objectives – developing a rich understanding of what the customer is trying to accomplish and how value will actually be generated. At first glance this may seem a fine distinction, but in practice it leads to a very different mindset that creates very different outcomes. Implications of this different mindset include:


Desired business outcomes precede features and functions.
A fact-based approach will focus first on the business results – described in financial terms – that are the reason a system is being developed or enhanced. Certainly most projects begin with some sort of high-level statement of business objectives that justifies initiation of the project, but that focus is often lost once the team starts to “find out what they want” – by the time the project is a couple of months old, few remember the initial rationale. Most projects quickly lose sight of the “why are we doing this?” and "how does this feature/function contribute to realization of the expected business value?" point of view – losing the connection between “what they asked for” and how satisfying those wishes will produce business value.


An accounts receivable system, to offer a simple example, fundamentally has only one reason to exist – i.e., to facilitate collection of money owed to the organization. Even in such a simple example, systems are very often built with dozens of different ways to enter transactions or view amounts outstanding, reflecting the individual preferences of collections and accounting personnel in the various divisions and regions of the organization. Perhaps many of these units were acquired over a period of time. Perhaps they all used different systems and different business processes – some used Oracle, some SAP, some had QuickBooks. They all want to have their reports and screens exactly the way they are accustomed to seeing them – and as a consequence the implementation team builds far more software than is fundamentally necessary, creates many versions of the training, and provides help desk support for all the variants. A very large part of this extra effort has essentially no actual business value.

Impact-based selection of functionality to be delivered.
Instead of a "popularity contest" that relies on some sort of voting scheme and/or on the political or financial clout of certain stakeholders, a more fact-based approach will produce better results. A scorecard based on an adaptation of the Pugh method appropriate to the circumstances can provide a formal mechanism that facilitates objective evaluation. Proposed features and functions can be rated against an agreed set of CTQ attributes that reflect not only the business outcomes but also important non-functional attributes of a solution that meets all "well-founded" customer requirements (as distinguished from wishes and matters of taste or style). Attributes that may be rated for each proposed feature/function might include some of the following:
  • The contribution it makes to financially measured outcomes – if we add this feature, will our collections improve?
  • The contribution it makes to the cost of operating the system – if we add this feature, will it reduce our operating cost or cost of ownership?
  • The contribution it makes to the efficiency of the personnel using the system – if we add this feature, will it reduce the time it takes to enter transactions or to perform a collection activity?
  • The contribution it makes to deployment of the system – if we add this feature, will it reduce training time? Reduce development of training materials?
  • The contribution it makes to system reliability – does it make the system more foolproof? Is the cost of the feature consistent with the associated failure risk?
  • The contribution it makes to security – does the feature make the system less vulnerable? Is the cost of the feature consistent with the associated risk?
  • What portion of the system's users will use the feature – is the user base impacted consistent with the development cost?
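To make the scorecard mechanics concrete, here is a minimal sketch of a Pugh-style rating matrix in Python. The criteria, weights, feature names, and ratings below are all hypothetical – a real scorecard would use attributes and weights agreed with the stakeholders – but the arithmetic of weighting better/same/worse ratings against a baseline is the same:

```python
# Minimal Pugh-style scorecard sketch. All criteria, weights, feature
# names, and ratings are hypothetical illustrations, not from the post.

CRITERIA = {                      # CTQ attribute -> agreed weight
    "collections_impact": 3,
    "operating_cost": 2,
    "user_efficiency": 2,
    "deployment_effort": 1,
    "reliability": 2,
    "security": 2,
    "user_base_reached": 1,
}

def pugh_score(ratings):
    """Weighted sum of -1/0/+1 ratings versus the current-system baseline."""
    return sum(weight * ratings.get(criterion, 0)
               for criterion, weight in CRITERIA.items())

# Each feature is rated better (+1), same (0), or worse (-1) than the
# baseline on each criterion; unrated criteria default to 0 (same).
features = {
    "auto_dunning_letters":    {"collections_impact": 1, "user_efficiency": 1,
                                "deployment_effort": -1},
    "custom_regional_reports": {"user_base_reached": -1, "operating_cost": -1},
}

ranked = sorted(features, key=lambda f: pugh_score(features[f]), reverse=True)
for name in ranked:
    print(f"{name}: {pugh_score(features[name])}")
```

Because the ratings and weights are out in the open, arguments shift from "I want my feature" to "is this rating justified?" – which is exactly the visible, defensible discipline described here.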

Questions such as these (and certainly there could be many others relevant to a particular situation) can be an effective screen that keeps low-value features from gumming up the works. When ratings have been assigned to the proposed features and costs have been estimated for each, it is a relatively simple matter to use the resulting scores as the basis for decisions on what to include in light of the available budget. A software firm I know applied this approach to a major new release – when they presented their approach and results, they received a standing ovation from the user group members who had participated in identifying potential features/functions and in the ratings process. They saw a significant increase in upgrade revenues, and for the first time in years the internal friction between development and marketing was reduced to a low boil.
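One simple way to turn the scores and cost estimates into include/exclude decisions, sketched below with made-up numbers, is to pick features greedily by score per unit of cost until the budget runs out. This greedy pass is only a heuristic (the underlying problem is a knapsack), but it is transparent and easy to defend in front of stakeholders:

```python
# Hypothetical scorecard results: (feature, scorecard score, estimated cost).
candidates = [
    ("auto_dunning_letters",    9, 40),
    ("aging_dashboard",         7, 25),
    ("custom_regional_reports", 2, 60),
    ("bulk_payment_import",     6, 30),
]

def select_within_budget(candidates, budget):
    """Greedily pick features by score-to-cost ratio until the budget is spent."""
    chosen, spent = [], 0
    for name, score, cost in sorted(candidates,
                                    key=lambda c: c[1] / c[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

chosen, spent = select_within_budget(candidates, budget=100)
print(chosen, spent)
```

Every number feeding the decision is visible, so a stakeholder who disagrees with the outcome has to argue with a rating or a cost estimate rather than with the process itself.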

Is this rocket science? Of course not! Did we need advanced statistics? No way! What was needed, and what some of the LSS tools supplied, was a disciplined, fact-based process that was visible, understandable, and defensible. Certainly there was room for argument on the ratings, and many arguments occurred, but in the end, everyone involved understood how and why the decisions were reached. Internal and external alignment was better than it had been in years.
