As mentioned in a previous post, I am now part of an organisation that is attempting to convert much of its IT development to an “agile” approach, more specifically by adopting the “Scaled Agile Framework” (SAFe®). Although I am not directly involved in this work I still see and hear enough to notice some patterns emerging.
The origins of agile are in development, i.e. where you basically turn requirements into code. This is more or less the same all over, but when you start scaling agile to cover bigger “organisations”, then suddenly there are a couple of different flavours to be considered.
If you develop software as a product to sell to others, either by putting it on the shelf or using a “made-to-order” approach, the objective is either to put the most appealing product on the shelf or to meet the customer's requirements as effectively and efficiently as possible. Once the product is done, that effectively concludes the exercise, and your agile approach is therefore really “scaled agile product development”.
However, if you are an organisation that develops or implements software for internal use using agile, then you are effectively doing “scaled agile system implementation”, which is a slightly different flavour to the development one. It’s different because you are now responsible as an organisation for realising the business benefits of the solution and not just completing the product (in essence, outcome vs. output responsibility), and that means that new parameters/considerations start to creep in.
- How do we ensure usability of the solution over the entire life cycle?
- How do we cover compliance requirements, not just for the base product but also whatever is required for usage and maintenance of the solution over the entire life cycle?
- How do we cover user stories to deal specifically with upgrades and maintenance activities in our organisation? (and, just as importantly, how do we get those user stories prioritised during development?)
- And last but not least (actually, probably the opposite!) – how do we include organisational change management activities and user acceptance as part of the agile implementation cycle?
That is of course not to say that these points are unimportant when you develop software for sale, but they form a bigger part of the total cost/total value equation when the scope is a full implementation and usage cycle – and it’s becoming clear to me that much of the complexity in scaling agile is not in the development itself, but in the parts that follow from being agile at scale…
A million and one (and counting…) of these already, but here’s my view. This is based on my experience with the SAP ecosystem, but I am fairly certain that most of it applies to most other ERP- and ERP-like enterprise software packages as well. It is more or less the antithesis to the large, System Integrator (SI)-driven business transformation programs that I have been part of over the last few years, but these are hard learnings based on many hours of my life so that must count for something – right?
So, very quickly: why doesn’t the “business-as-usual” approach work?**
- “Waterfall” project models give a very long feedback cycle on the quality of the chosen solutions as the whole project is effectively design-build-test-deploy-evaluate. The long feedback cycle also prevents effective business feedback on the solution because the “real users” don’t see it before it is too late to make significant changes. To quickly and effectively deal with unforeseen technical issues and weak business requirements you should aim for design-build-validate (cyclic) and then test-deploy (linear).
- Functional silo-thinking overlooks end-to-end solutions: In any (complex) business process, end-to-end alignment is king because the hard part is not solving a problem, the hard part is solving one problem without causing five new ones in a different area of your end-to-end process.
- Functional silos impede progress: If the focus from program management in designing/building/testing (and this might be typical for SI-driven programs) is “why do you have so many issues in sales”, then the challenge for any functional team becomes reducing the number of issues – not dealing with the issues causing the biggest impact for other areas. This might paradoxically lead to cases where teams prioritise the simple issues to make their stats look better, rather than solving the hard problems that genuinely impede progress and reduce solution quality. Defects and other issues should therefore be rated by importance, but criticality needs to be measured on two axes: business/solution impact and project/progress impact.
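The two-axis rating above can be sketched as a very simple scoring scheme. The field names, 1–5 scale and multiplicative combination below are illustrative assumptions for the sketch, not part of any framework:

```python
# Illustrative sketch: rank defects on two axes instead of raw issue counts.
# The 1-5 scales and the multiplicative score are assumptions for the example.

def criticality(business_impact: int, progress_impact: int) -> int:
    """Combine business/solution impact and project/progress impact
    (each rated 1-5) into a single ranking score."""
    return business_impact * progress_impact

defects = [
    {"id": "D-101", "business_impact": 5, "progress_impact": 4},  # hard, high-impact
    {"id": "D-102", "business_impact": 2, "progress_impact": 5},  # blocks other teams
    {"id": "D-103", "business_impact": 1, "progress_impact": 1},  # easy stat-padding
]

# Work the backlog in order of combined criticality, highest first,
# so the "easy wins" cannot crowd out the problems that actually matter.
ranked = sorted(
    defects,
    key=lambda d: criticality(d["business_impact"], d["progress_impact"]),
    reverse=True,
)
print([d["id"] for d in ranked])
```

The point of the sketch is only that a single ordered backlog falls out naturally once both axes are scored; how the axes are combined (product, weighted sum, etc.) is a local choice.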
- Functional silo thinking prevents focus on real results: The ultimate goal is a successful business transformation – not just a successful golive. So instead of a focus on outputs, the focus must be squarely on achieving the desired outcomes.
**Obviously, it does work in the sense that stuff gets delivered. My claim is just that by following these guidelines you’ll be able to get a) more of the stuff that really matters done, and b) more done for less money invested.
Now, in terms of more practical advice, here are a few of the most important points for me:
- After basic solution scoping and initial design of the solution, adopt a semi-agile approach instead of the normal design/build/test/deploy waterfall:
- 1) Configure/develop the basic standard process and test with the business experts to ID gaps in functionality.
- 2) Build the “core integration” (interfaces) and major functional enhancements that significantly impact the process flows. Repeat testing/validation to fix issues and ID additional gaps.
- 3) Once the basic process is stable and working E2E, build out other functional enhancements and “cosmetics” such as outputs, forms etc. that are necessary but have less impact on the E2E process flow and stability of the process (this includes any unavoidable, legally mandated outputs: they may be important, but they do not change the process).
- 4) Feed each sub process into the testing cycle once it is ready (or via a stage-gate procedure to allocate scarce resources between development and testing).
This should mean that the most basic flows and those with the highest business benefits are handed over to testing first and receive the most attention.
- Use a project manager/lead for each E2E process to manage scope, ensure gaps are constantly prioritised and that overall progress is maintained.
- For integration- and acceptance testing with more people/functions involved, a key metric should be rate of flow, which can be helped by “loading the test pipeline from the back”. This means e.g. making OTC-scripts that are simple in sales/logistics and complex in finance and then building up the complexity in sales/logistics gradually. This should ensure that you quickly get testing to a stage where everyone has stuff to work on, instead of having massive complexity in sales blocking logistics and finance from doing anything for a long time.
- Use a small team of business experts and system experts with focus (= 100% assignment to the project) and a clear mandate to make decisions on behalf of the business. Only expand the team if it is absolutely needed to fulfil specific needs in terms of capability or capacity. Work normally expands to fill the available hands anyway and having as few people in the loop as possible reduces complexity and speeds up issue resolution.
- Let the business leads decide whether to add more people to the project or not, not the SI. When what you are selling is people’s time, it’s no wonder that the automatic response to any sign of trouble is usually “we need more people”. Your business leads will normally be able to judge whether there is really a benefit to adding more people or whether the marginal cost/complexity of enlarging the team outweighs any benefits (because at some point it will…).
- Build the project organisation on a process-based E2E structure and responsibility, e.g. an MTC-team responsible for making the entire market-to-cash process work instead of only the sales part of the process. Use these teams to ID the process variants of the business and prioritise them based on business value (usually share of revenue/profit), frequency of execution and customer impact as part of the initial scoping and design phases.
- Functional alignment in sales, logistics etc. should be used as well, but mainly to prevent inconsistent approaches and conflicts where there are overlaps in configuration and functionality. Each process lead needs to be accountable for this cross-functional alignment activity as well.
- Note that this is a big change in the way of working for many organisations, so if you’re concerned that this is too much to swallow, then start out with a functional organisation during requirements gathering and move into the process organisation towards the start of testing. A project like this will always be a matrix structure anyway, so the only question is really which part of the matrix leads, and that should then change over time as the project progresses.
- When designing and building the solution, don’t aim to solve the business “how” but the business “why”. Differentiate between “business requirements” and “solution requirements”.
A business requirement is a request for a specific output or outcome, and it may be grounded in a real need – or in legacy thinking, misunderstandings of what the system can do, stakeholder conflicts, weak/undocumented business processes and so on.
A solution requirement, on the other hand, is an approved addition to the solution that has been screened and assessed based on at least three key criteria:
- 1) It adds overall value to the business.
- 2) It passes at least a very simple “value for money” test on business benefit vs. cost to implement.
- 3) It is delivered in the most technically appropriate way, taking into account as many technical and business process considerations as needed (both long and short term).
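As an illustration of criterion 2, the “value for money” test really can be this simple. The threshold, horizon and estimates below are assumptions made up for the sketch:

```python
# Minimal sketch of a "value for money" screen for a candidate requirement.
# The 3-year horizon and the 1.5x benefit/cost threshold are illustrative
# assumptions, not recommended values.

def passes_value_check(annual_benefit: float, cost_to_implement: float,
                       horizon_years: int = 3, min_ratio: float = 1.5) -> bool:
    """Accept a requirement only if the expected benefit over the horizon
    clearly exceeds the cost to implement it."""
    if cost_to_implement <= 0:
        return True  # effectively free: nothing to screen
    return (annual_benefit * horizon_years) / cost_to_implement >= min_ratio

# A bespoke output form: modest benefit, high build cost -> screened out.
print(passes_value_check(annual_benefit=10_000, cost_to_implement=60_000))  # False (ratio 0.5)
# A process automation: solid benefit, moderate cost -> passes the screen.
print(passes_value_check(annual_benefit=50_000, cost_to_implement=40_000))  # True (ratio 3.75)
```

Even a rough screen like this forces the conversation from “we need this” to “what is this worth”, which is the real point of the criterion.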
- Ensure that nothing is accepted as a solution gap unless the business rationale for the gap is captured and the impact is clear enough to be used for prioritisation. The key reason for this is that requirements change and/or become obsolete over time, so the rationale will be key to managing the solution after implementation. The rationale is what tells you whether something needs to be considered when new requirements come along, or whether it can comfortably be ignored or changed because it’s out of date.
And last but not least: If anyone comes in claiming that their solution design or approach is “best practice”, kick them out of the room as quickly as possible before they do any real damage… Always ask for the rationale behind choosing something versus your requirements. You’re not interested in what worked for others; you’re interested in something that will work for you 🙂