24 October 2015

A Thin Slice of Value

The very first principle of the Agile Manifesto reads:

“Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.”

I’m reasonably sure that most people (business, technical and customers alike) would agree that, as a principle, this is sound and that we should indeed seek ways to identify and deliver value as early as possible in any development initiative, software or otherwise. The principle, however, only gets us so far: it makes what we want to do clear enough but doesn’t help much with how to actually do it.

This post addresses the how by providing a step-by-step guide to one approach to finding the earliest and best possible slice of deliverable value. The approach is called Blitz Planning and is introduced and covered in detail in Alistair Cockburn’s Crystal Clear [1].  Blitz Planning is applicable in a wide range of contexts, from software development through to business process change. 

Re-Introducing Blitz Planning – Setting the scene

Firstly, it’s important to understand that Blitz Planning is not about generating user stories or identifying epics and features, although it is likely to provide an input to some of these things if used in this context. The output of a Blitz Planning activity can inform your release plan; however, the key objective of Blitz Planning is to find the earliest possible point at which business value (revenue or savings) can be delivered.

I like to run this activity as part of an inception, which is the multi-day planning workshop used to kick off a new or re-imagined project, initiative or other piece of work. You can read more about inceptions here and here.  It makes most sense to conduct your Blitz Planning exercise before you start to identify epics or features and while you are still working at a fairly high level. 

It’s critical to have all the right people in the room. This means the representatives of all the stakeholder groups who will be impacted by, or who will have an interest in, your project. This should include technical people, business people (including your executive sponsor), process people and, if possible, your customer (the end user of your product) or their (knowledgeable) representative.

To conduct the exercise you need index cards, stickies, builder’s tape, Sharpies and a large table or floor space. It’s important to be able to easily move the cards around.

Getting Started

Step 1: What are all the things we need to do to deliver this project?

Ask your participants to form affinity groups, by which I mean groups of shared interest. For example, you might have a technical group made up of developers, QAs, BAs and infrastructure/operations folk. Another group may be made up of business representatives, marketing team members and perhaps your project sponsor. If you have end user/customer representatives in the room, they might form a team with training and communications experts.

Ask each team to brainstorm and then write cards for every task which needs to be done. You should include all kinds of tasks, but not processes; for example, migrate database or order hardware are tasks, whereas processes such as run a showcase or run an elaboration workshop are not. Don’t worry too much about granularity at this point; you will get a chance to review that shortly.

Step 2: Where are the dependencies?

Ask each team to group their cards by theme and lay them out in vertical columns on the table or floor. The tasks should be ordered by dependency in each column; any task which is independent or can be done in parallel with another should be in its own column. You will end up with something that looks like this.

At this point, you will likely find some duplicates between the teams; these cards can be stacked on top of one another. You may also identify some tasks that are too large and have multiple dependent parts; break these out until you have only single columns of discrete but related tasks.

Ask each team to review the other teams’ columns of tasks, looking for duplicates and omissions, merging and refining columns as they work.

Step 3: Creating a big picture

Now bring together all the columns of tasks from the teams, placing them on the table but leaving a decent amount of space above the cards. Align all the columns horizontally in order of any obvious dependencies; for example, you may want to take into account any long lead times, such as regulatory approvals or print material runs. Don’t worry too much about these though; they are not critical at this point.

You will now have something that looks a bit like a walking skeleton or story map, but made up not of stories but of all the tasks that need to be done for the project.

Step 4: Estimate and tag

In this step, ask participants with relevant knowledge to tag each card with a high-level estimate of how long they think the task will take to complete. These estimates should be given in days, are intended to be very rough and should be based on total elapsed time. They are designed to give the team a general feel for the size of the project, nothing more, so don’t get too prescriptive; err on the side of generosity. Where there are a number of wildly differing estimates on a task, take the opportunity to explore the differences and come up with an agreed best guess.

Tag each card with an indicator of who or which team needs to do it. If a particular person with specific expertise is required, mark this on the card too. The objective of this step is to identify key constraints, such as a significant amount of work dependent on one individual or team. This should lead to discussion on how this might be avoided and presents an opportunity to think about how you might apply techniques such as the five focussing steps from the Theory of Constraints [2], [3].

Step 5: Review dependencies

Now bring your teams together and take some time to review your cards again as a group. Particular cards may trigger ideas for more cards. Question all dependencies; you may find that some things that have been identified as dependencies could in fact be done in parallel. You should also identify and tag any very strict dependencies that exist. An example of a strict dependency might be that you need to develop training materials before you can deliver them.
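If the card wall grows large, the same dependency questions can be sanity-checked in a few lines of code. The sketch below is purely illustrative (the task names and the `depends_on` mapping are made up for this example): it groups tasks into "waves", where every task in a wave can be started in parallel once the earlier waves are complete, mirroring the column layout on the table.

```python
def parallel_waves(depends_on):
    """Group tasks into waves: all tasks in a wave can run in parallel
    once every task in the preceding waves is done."""
    # Collect every task: the keys plus anything only named as a dependency.
    tasks = set(depends_on)
    for deps in depends_on.values():
        tasks.update(deps)

    done, waves = set(), []
    while len(done) < len(tasks):
        # A task is ready when all of its dependencies are already done.
        wave = sorted(t for t in tasks - done
                      if set(depends_on.get(t, [])) <= done)
        if not wave:  # nothing is ready: the cards contain a cycle
            raise ValueError("cyclic dependency between tasks")
        waves.append(wave)
        done.update(wave)
    return waves

# Hypothetical cards from a Blitz Planning wall.
cards = {
    "order hardware": [],
    "migrate database": ["order hardware"],
    "write training materials": [],
    "deliver training": ["write training materials", "migrate database"],
}

for i, wave in enumerate(parallel_waves(cards), 1):
    print(f"wave {i}: {wave}")
```

Here "order hardware" and "write training materials" land in the same wave, flagging them as candidates for parallel columns, while the strict training-materials dependency keeps "deliver training" last.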

Step 6: Make magic happen

Now grab your builder’s tape and create a horizontal line above your top row of cards, leaving space at the top of the table. Look at your columns of cards and ask: what tasks do we absolutely need to do to prove this concept? You are looking for the thinnest possible end-to-end slice of functionality. Move these cards above your line to the top of the table. Look for opportunities to make this as fast and lightweight as possible. Consider options such as not using the end-state architecture; perhaps you decide that you can use a flat file to hold data rather than a database. Make sure you have included any tasks which are not part of the development process but which need to be done first, such as obtaining software licences or spiking out a technical challenge. This first cut may not be usable in a production sense, but it should prove your concept. This is particularly valuable where your project requires you to integrate several systems.

Place a second line of tape beneath these cards; you should have a gap between the line of tape and the top of your remaining columns of cards. Move cards to make this space if you need to. Now place the cards that are absolutely needed to create a product that someone can use below this second line of tape.  This is your MVP; its purpose is to allow you to get early feedback on your product and is an important potential pivot point for the business.

Place a third line of tape beneath your MVP and add the cards that would be needed to create a product that generates the earliest possible business value. This may take the form of earned revenue or savings. This step may trigger discussions about how business value can be measured and may also result in the generation of more cards around identifying appropriate metrics. If this is the case, estimate and tag these cards and place them appropriately.

Now add up the estimates in each column for each of your releases, then add these column totals horizontally to get a rough elapsed-time figure for each release.
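The roll-up itself is simple arithmetic. As an illustration (the card names, release labels and day figures below are entirely made up), tagged cards can be totalled per release like so:

```python
# Hypothetical tagged cards: (task, release, rough estimate in elapsed days).
tagged_cards = [
    ("spike payment gateway", "proof of concept", 3),
    ("order hardware",        "proof of concept", 5),
    ("build checkout flow",   "MVP",              10),
    ("migrate database",      "MVP",              4),
    ("add usage reporting",   "first revenue",    8),
]

# Sum the estimates for each release slice.
totals = {}
for task, release, days in tagged_cards:
    totals[release] = totals.get(release, 0) + days

for release, days in totals.items():
    print(f"{release}: {days} elapsed days")
```

Remember these are deliberately rough, generous figures; the point is to give the sponsor an early feel for the relative size of each slice, not a committed schedule.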

At this point, both the team and your sponsor have enormous early visibility over the project: its size, scope and complexity. This can lead to a number of important discussions about how valuable a project actually is and how much needs to be included to deliver sufficient value. The team and the sponsor may be either pleasantly or unpleasantly surprised by the results, but everyone will probably be appreciative of the level of transparency and the shared understanding they now have.

Step 7: Identify further releases and mitigate risks

Assuming that your project sponsor has not decided that the whole project is now a bad idea, you can go on to identify further releases. These can be represented by clusters of functionality that add additional business value; you may decide to move a cluster up to an earlier release, or to delay a cluster which does not appear to add sufficient value. You may also spot cards that carry a large amount of risk late in the project, especially if they are tasks that could be very expensive if they go wrong. You can discuss how to mitigate these risks by looking for opportunities to start them as early as possible.


Having used this technique several times now, I have found it to be very worthwhile from a number of different perspectives.

·      It provides you with the high-level planning equivalent of a user story card. (In the same way that a story card is not a requirement but a placeholder for a conversation, the potential releases you have identified are placeholders for further conversation.) You can take each one of these forwards to use as a starting point for more detailed release planning activities. 

·      Both business and technical stakeholders have a shared understanding and level of visibility over the project. This leads to more valuable conversations and the minimisation of the tensions between technology and business that can sometimes emerge, even in the most enlightened organisations. 

·      The approach forces everyone to focus on delivering the most valuable software for the customer as early as possible and on measuring that value.

This post has been my interpretation of how to conduct and use this technique; if you would like to read a more detailed description of its origins and use, you should buy and read Crystal Clear or attend an Advanced Agile Master Class, where it is taught by its inventor, Alistair Cockburn.

[1]      A. Cockburn, Crystal Clear: A Human-Powered Methodology for Small Teams. Addison-Wesley, 2008.
[2]      “Theory of Constraints.” [Online]. Available: http://www.leanproduction.com/theory-of-constraints.html. [Accessed: 24-Oct-2015].
[3]      M. Naor, E. S. Bernardes, and A. Coman, “Theory of constraints: is it a theory and a good one?,” Int. J. Prod. Res., vol. 51, no. 2, pp. 542–554, Jan. 2013.

11 August 2015

Frameworks for Scaling Agility; A Cautionary Tale

Note:  This post was originally published on LinkedIn

In his seminal paper entitled “Agility from first principles: Reconstructing the concept of Agility in ISD” [1], Kieran Conboy offers the following as a definition of Agility:
“…the continual readiness of an ISD method to rapidly or inherently create change, proactively or reactively embrace change, and learn from change while contributing to perceived customer value (economy, quality and simplicity) through its collective components and relationships with its environment.”
Conboy’s definition was created in the context of an evaluation of the efficacy of specific software development practices; it is based on a careful analysis of the terms Agile and Lean and is grounded in research into the history, principles and practices associated with them over the last several decades. Conboy carefully documents the process through which the definition is arrived at, such that it is repeatable, independent and trustworthy.
If you replace the words “ISD method” with “organisation” and scale the definition to an organisational level, it aligns quite nicely with the idea of responding to digital disruption, a term used to describe
“…changes enabled by digital technologies that occur at a pace and magnitude that disrupt established ways of value creation, social interactions, doing business and more generally our thinking.” [2]
In other words, in order to respond to digital disruption organisations perceive that they need Agility.
Given the urgent calls for businesses to embrace the concept of digital disruption from organisations such as the Reserve Bank of Australia [3], amongst others, as well as the somewhat dismal figures for the success of technology projects in general [4], it should not be at all surprising that Agile and Lean approaches to doing business are currently of great interest to organisations working with digital technology. This means that more and more businesses are asking how they can use Agility to leverage the perceived benefits of digital disruption within their organisations.
However, organisations which have attempted to introduce or to scale Agile, have learned that the implications of adopting Agility are far broader than just replicating the approaches and activities that have been successful for individual teams. Adopting Agility at an enterprise level implies a significant cultural and organisational change. This kind of change impacts roles and responsibilities, corporate governance mechanisms, reporting mechanisms, approaches to corporate and financial planning, marketing, sales forecasting and public relations; as well as demanding new and different conversations with stakeholders, shareholders and the user community.
More than a decade of research into Agility has resulted in a substantial body of literature highlighting these challenges, as well as the practical difficulties of scaling Agile in-the-large, which include managing variability amongst team processes, lifecycles and approaches to developing and managing requirements [5], [6], [7], [8], [9], [10], [11]. Whilst a number of high-profile digital organisations have successfully adopted Agile as a whole-of-business approach [12], [13], these success stories are not the majority. So whilst corporate interest continues to grow, so do concerns about the risks and challenges implicit in attempting to initiate such broad corporate change. Consequentially, organisational governance teams are seeking assurances about the potential costs and benefits of making these kinds of changes [14].

Enter the Frameworks for Agile in-the-large

In response to these concerns, a number of frameworks offering a pathway to Agile in-the-large have emerged; some examples are DaD (Disciplined Agile Delivery) [15] and, more recently, LeSS (Large Scale Scrum) [16] and SAFe (Scaled Agile Framework) [17].
LeSS (Large Scale Scrum), championed by author Craig Larman [16], proposes adding just enough extra structure to the existing Scrum framework to better support scaling to more than one team. LeSS does not add new concepts, rituals or artefacts to Scrum but proposes an approach to applying existing ideas to larger groups, with the primary objective of improving communication. LeSS offers a certification scheme and a range of courses targeting both executives and Scrum practitioners, and has recently launched a new website [16] which incorporates a big-picture concept similar to that proposed by SAFe.
DaD (Disciplined Agile Delivery) [15] is proposed by Scott Ambler and is an evolution of his work on the EUP (Enterprise Unified Process) [18], which in turn extends the RUP (Rational Unified Process) [19]. DaD proposes a hybrid approach to scaling Agile and incorporates strategies drawn from a range of lightweight methods and Lean principles, as well as mandating a set of prerequisite core practices. DaD claims to extend Agile principles across the system lifecycle and provides approaches to managing challenges such as geographically diversified delivery teams, complex organisational structures and multiple technology platforms [20].
SAFe is the most recent of the Agile in-the-large frameworks and is based on the work of Dean Leffingwell [21]. SAFe proposes a three-tiered approach to scaling Agile, which addresses the needs of the team, the program and the portfolio. SAFe is the first Agile in-the-large framework to offer a complete package of supporting software, certification schemes, books and training courses, driven by an intensive marketing effort [22]. SAFe claims to provide a proven approach to scaling Agile, a claim supported by a number of customer case studies. Software vendors have developed tools designed to support SAFe through both their products and through training and certification.
SAFe has been the topic of some particularly heated discussions in the technical literature. These highlight strong concerns voiced by many well-respected authors, practitioners and thought leaders who, whilst acknowledging that one of the strengths of SAFe is its basis in Lean principles, have raised concerns about whether the framework actually supports the principles and values which underpin Agile, or whether it undermines them [23]. A focus on standardisation, large-scale planning and taking a top-down approach are also highlighted as potential problems, primarily due to an apparent focus on process rather than on people [24]. SAFe claims to present an approach to scaling Scrum, yet Ken Schwaber, one of the designers of Scrum, argues that SAFe is based on RUP rather than Scrum [25]. Ron Jeffries, one of the original designers of XP, also highlights the centralised approach to planning dictated by SAFe as an issue [26].
Both DaD and SAFe add new rituals, practices and other modifications to existing Agile methods; both propose significant organisational change and both claim to address corporate governance issues, DaD through blending traditional and Agile thinking and SAFe through offering transformational patterns as a bridge from traditional to Lean-Agile approaches. All the frameworks discussed in this post are associated with large amounts of expensive supporting material, such as training courses, certification schemes and supporting software products, and of course highlight the need for specialist consulting as a pathway to success.
These frameworks are primarily designed to assuage the concerns of organisational governance teams who are keen to leverage the perceived benefits of Lean and Agile approaches whilst mitigating a myriad of well-documented inherent risks. Whilst all these frameworks emphasise adaptation to meet the needs of implementing organisations, they are presented as highly prescriptive pathways to success, and unlike Agile implementations at the team level, there is very little empirical evidence in the substantive literature to support claims that these approaches are proven or that they represent a good investment of potentially very large amounts of money. More conceptual approaches, such as Lean governance and those associated with Agile in-the-small (meaning the use of specific practices such as XP, pair programming, BDD and TDD), are well researched, and there exists a great deal of empirical evidence to support the idea that in various circumstances these practices can deliver benefits; but on their own they are not sufficient.

The way forward

So what is the way forward? How can organisations take advantage of Agility to help them respond to new and rapidly evolving demands and opportunities, whilst continuing to minimise risk and meet their obligations to stakeholders? The research on organisations which have implemented Agile in-the-large still presents us with more questions than answers [27]; however, we can identify some of the organisational characteristics which seem to influence perceptions of success.
Key amongst these is the need for a shared understanding of what Agility actually means, along with clarity of purpose from an organisational governance perspective [7]. For example, if your goals are around increasing speed to market or making improvements to product quality within an existing culture and governance framework, then a structured approach or framework will probably be appealing, especially if that framework also suggests that you can do these things more efficiently (cheaply) by adopting it. But this is not Agility; this is procedural change, and much of the existing research suggests that organisations that confuse Agility with process improvement will not be satisfied with the outcomes, nor will they achieve the certainty they desire. If, on the other hand, your goal is to deliver value to your customer through understanding and responding to their needs, you are not only starting from a different place but will need to take a very different approach. Structuring your organisation around this goal can require a seismic shift in an organisation's culture and management style. Wanting to be better able to respond to an uncertain and disrupted marketplace demands a level of tolerance for uncertainty that many organisations struggle with.
In addition to a shared vision and clarity of purpose, the characteristics shared by organisations that perceive themselves to have been successful in this regard are:
  • An understanding that the biggest challenges are likely to be around people, perceptions and concepts rather than technology [5]
  • A willingness to iteratively experiment and to learn
  • A culture that engages and empowers both employees and customers to be part of the experiment and learn cycle
  • Transparency in everything, including things which don’t work
  • Patience; being willing to start small, measure and build on small improvements in customer value, and employee satisfaction


Enterprise-level Agile is not well researched or understood, and whilst many of the challenges associated with scaling Agile have been identified, approaches to solving them are emergent and potentially immature. Whilst frameworks such as DaD and SAFe, which propose step-by-step approaches to implementation, may seem appealing and present attractive testimonials and case studies to support their claims, these claims are not based on empirical research and in many cases are at odds with the empirical evidence which is available.


[1]       K. Conboy, “Agility from First Principles: Reconstructing the Concept of Agility in Information Systems Development,” Inf. Syst. Res., vol. 20, no. 3, pp. 329–354,478, 2009.
[2]       K. Riemer, “Digital Disruption,” Backed By Research, 2014. [Online]. Available: https://byresearch.wordpress.com/2013/03/07/digital-disruption/.
[3]       S. Girn, “Digital Disruption – Opportunities for Innovation and Growth.” Reserve Bank of Australia, 2014.
[4]       P. Adamczyk and M. Hafiz, “The Tower of Babel did not fail,” ACM SIGPLAN Notices, vol. 45, no. 10. ACM, Reno/Tahoe, Nevada, USA, p. 947, 2010.
[5]       B. Boehm and R. Turner, “Management challenges to implementing agile processes in traditional development organizations,” Software, IEEE, vol. 22, no. 5, pp. 30–39, 2005.
[6]       S. C. Misra, U. Kumar, V. Kumar, and G. Grant, “The Organizational Changes Required and the Challenges Involved in Adopting Agile Methodologies in Traditional Software Development Organizations,” in Digital Information Management, 2006 1st International Conference on, 2007, pp. 25–28.
[7]       A. Mahanti, “Challenges in Enterprise Adoption of Agile Methods -- A Survey,” J. Comput. Inf. Technol., vol. 14, no. 3, pp. 197–206, 2006.
[8]       G. Van Waardenburg and H. Van Vliet, “When agile meets the enterprise,” Inf. Softw. Technol., vol. 55, no. 12, pp. 2154–2171, 2013.
[9]       C. Rand and B. Eckfeldt, “Aligning strategic planning with agile development: Extending agile thinking to business improvement,” in Proceedings of the Agile Development Conference, ADC 2004, 2004, pp. 78–82.
[10]     K. Logue and K. McDaid, “Agile Release Planning: Dealing with Uncertainty in Development Time and Business Value,” in Engineering of Computer Based Systems, 2008. ECBS 2008. 15th Annual IEEE International Conference and Workshop on the, 2008, pp. 437–442.
[11]     V. Heikkilä, K. Rautiainen, and S. Jansen, “A revelatory case study on scaling agile release planning,” in Proceedings - 36th EUROMICRO Conference on Software Engineering and Advanced Applications, SEAA 2010, 2010, pp. 289–296.
[12]     K. Power, “The Agile Office: Experience Report from Cisco’s Unified Communications Business Unit,” in Agile Conference (AGILE), 2011, 2011, pp. 201–208.
[13]     P. Saddington, “Scaling agile product ownership through team alignment and optimization: A story of epic proportions,” in Proceedings - 2012 Agile Conference, Agile 2012, 2012, pp. 123–130.
[14]     J. Pernstål, R. Feldt, and T. Gorschek, “The lean gap: A review of lean approaches to large-scale software systems development,” J. Syst. Softw., vol. 86, no. 11, pp. 2797–2821, 2013.
[15]     S. Ambler, “Disciplined Agile Delivery,” 2014. [Online]. Available: http://disciplinedagiledelivery.com/.
[16]     “Large Scale Scrum (LeSS),” 2014. [Online]. Available: http://less.works/less.
[17]     “Scaled Agile Framework,” 2014. [Online]. Available: http://scaledagileframework.com/.
[18]     S. Ambler, “Enterprise Unified Process (EUP): Strategies for Enterprise Agile,” 2014. [Online]. Available: http://enterpriseunifiedprocess.com/.
[19]     “IBM Rational Unified Process (RUP),” 2014. [Online]. Available: http://www-01.ibm.com/software/rational/rup/.
[20]     A. W. Brown, S. Ambler, and W. Royce, “Agility at scale: economic governance, measured improvement, and disciplined delivery,” in Proceedings of the 2013 International Conference on Software Engineering, 2013, pp. 873–881.
[21]     D. Leffingwell, “Dean Leffingwell,” 2014. [Online]. Available: http://deanleffingwell.com/.
[22]     P. Saddington, “The Scaled Agile Framework (SAFe) - A Review | Agile ScoutAgile Scout,” 2014. [Online]. Available: http://agilescout.com/scaled-agile-framework-safe-review/.
[23]     N. Killick, “The Horror Of The Scaled Agile Framework | neilkillick.com,” 2012. [Online]. Available: http://neilkillick.com/2012/03/21/the-horror-of-the-scaled-agile-framework/.
[24]     A. Elssamadisy, “Has SAFe Cracked the Large Agile Adoption Nut?,” InfoQ, 2014. [Online]. Available: http://www.infoq.com/news/2013/08/safe#.
[25]     K. Schwaber, “unSAFe at any speed,” Telling it Lite it is, 2014. [Online]. Available: http://kenschwaber.wordpress.com/2013/08/06/unsafe-at-any-speed/.
[26]     R. Jeffries, “SAFe – Good But Not Good Enough | xProgramming.com,” 2014. [Online]. Available: http://xprogramming.com/articles/safe-good-but-not-good-enough/.
[27]     T. Dingsøyr and N. B. Moe, “Research challenges in large-scale agile software development,” SIGSOFT Softw. Eng. Notes, vol. 38, no. 5, pp. 38–39, 2013.