We live in a world driven by a Need for Speed. When leaders mention agility, continuous integration and continuous deployment, speed is often presented as the key driver and objective. Having worked with dozens of companies, my learning is that companies go through a number of evolutionary stages in order to work with speed.
In the first stage, companies use the notion of speed to refer to developing as many features per unit of time as possible (also referred to as flow). The belief system of the company at this point is that more features result in more value: an organization that builds twice as many features delivers twice as much value. Consequently, maximizing value means pressuring agile teams to deliver as much functionality as possible every sprint.
The problem with this viewpoint is that it assumes we can effectively predict the needs of customers, build the functionality to match those needs and then live happily ever after. However, research by us and others shows that likely between half and two-thirds of the features in a typical system are never used, or are used so seldom that the R&D investment in them is not justified.
Once companies realize this challenge, they move to the second stage. Here, features are still prioritized in a qualitative, opinion-driven process, but each feature itself is treated in a quantitative, data-driven fashion. This means that the intended effect of the feature is described in measurable terms and that, after development and deployment, its actual effect is measured. This creates a feedback loop between prediction and measured effect that allows companies to make more informed decisions going forward. Of course, it also allows companies to develop features iteratively, rather than building each feature top to bottom, including all the exceptional use cases.
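To make the stage-two feedback loop concrete, here is a minimal sketch in Python. The `Feature` class, metric names and numbers are all hypothetical illustrations, not part of any specific company's tooling: the point is simply that the intended effect is stated in measurable terms up front, and the measured effect is compared against it afterwards.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feature:
    """A feature whose intended effect is stated in measurable terms."""
    name: str
    metric: str              # e.g. "conversion_rate" (hypothetical metric)
    predicted_effect: float  # expected relative lift, e.g. 0.05 for +5%
    measured_effect: Optional[float] = None  # filled in after deployment

def record_outcome(feature: Feature, baseline: float, observed: float) -> float:
    """Close the loop: compute the measured effect and return the
    prediction error, which feeds the next prioritization round."""
    feature.measured_effect = (observed - baseline) / baseline
    return feature.measured_effect - feature.predicted_effect

# Usage: a (hypothetical) checkout feature predicted to lift conversion by 5%.
f = Feature("one-click checkout", "conversion_rate", predicted_effect=0.05)
error = record_outcome(f, baseline=0.20, observed=0.21)
# observed lift is +5%, so the prediction error is (numerically) zero
```

The interesting part is not the arithmetic but the discipline: every feature carries a prediction, and every prediction is eventually confronted with data.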
The challenge with the second stage is that although the company is more effective at building the selected features (as it now measures whether the intended effect is achieved), it does not know whether the set of features selected for development is the optimal one, since feature selection is still qualitative and opinion-based. Once several highly prioritized features fail to deliver on expectations, the company moves to the third stage.
The third stage is concerned with maximizing the effectiveness of R&D. We define R&D effectiveness as generating as much value as possible for every unit of R&D resource invested. For instance, for a company whose R&D budget is 10% of revenue, every euro or SEK invested in R&D has to result in 10 euro or SEK of (captured) business value just for the company to break even. Once the company understands this math, the focus shifts to selecting the features that contribute most to the creation of business value. This requires a hierarchical value model in which top-level business KPIs are connected to feature-level metrics, and in which these metrics are proven leading indicators for the business KPIs. Once the company reaches this third level, high-level business KPIs are continuously aligned with lower-level metrics, and the relationship between them is modeled, validated and evolved continuously.
One key enabler for this way of working is fast feedback loops: driving towards the shortest possible delay between decision making and the collection, analysis and reporting of outcome data. In the data-driven development adoption process that I have been covering in recent weeks, the fourth step is concerned with exactly this: shortening feedback loops. As the figure below shows, once you start to model feature value in quantitative terms, the next step has to be measuring these metrics. The shorter the delay between developing the next increment of a feature and measuring its effect, the more effective the company can be.
Figure: Summary of the adoption process
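One way to make feedback-loop length operational is simply to timestamp the key events for each feature and measure the elapsed time from decision to reported outcome. A minimal sketch, with invented event names and dates purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical event log for one feature increment.
events = {
    "decision": datetime(2024, 1, 8),          # feature prioritized
    "deployed": datetime(2024, 1, 15),         # increment shipped
    "outcome_reported": datetime(2024, 2, 5),  # effect measured and reported
}

def loop_length(events: dict) -> timedelta:
    """Feedback-loop length: time from decision to reported outcome."""
    return events["outcome_reported"] - events["decision"]

print(loop_length(events).days)  # -> 28
```

Tracking this one number over time gives a concrete target for the fourth adoption step: each release, the loop should get shorter.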
Concluding, although most will refer to the Need for Speed as the critical factor for success, in practice what matters is developing the shortest possible feedback loops between development and the field. Shorter feedback loops allow for faster, more accurate, data-driven decision making. In that sense, it's not about speed, but about fast feedback loops. In practice, speed is an important enabler of fast feedback loops. We should, however, not forget that speed is an enabler rather than a goal; a means to an end.