One of my main takeaways from day 1 of the conference was the importance of using metrics to monitor what our applications are doing, how they are running, and whether they are currently in some problematic or broken state.
Once we have good metrics and a good set of monitoring systems on top of them, we can be much more aggressive in pushing out changes, because this style of monitoring gives us a very effective early warning system for any bugs or breakages we introduce.
The problem is that internal application metrics can sometimes be hard to capture and slurp out into monitoring or graphing software.
Scraping events out of log files or polling them out of a database is sub-optimal, and even coding them into the application explicitly can be difficult if they were not accounted for during the early phases of development.
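By contrast, once a metrics client is wired into the application, emitting a metric from feature code is usually a one-liner. Here is a minimal sketch assuming the Python `statsd` package and a StatsD daemon listening on localhost:8125; the metric names and the wrapped sign-up function are made up for illustration:

```python
import statsd

# Assumes a StatsD daemon on localhost:8125; metrics are sent over UDP,
# so nothing breaks if the daemon is absent.
metrics = statsd.StatsClient('localhost', 8125, prefix='myapp')

def instrumented_sign_up(user_details, create_user):
    """Wrap an existing sign-up function with attempt/success counters and a timer."""
    metrics.incr('signup.attempt')              # every attempt, successful or not
    try:
        with metrics.timer('signup.duration'):  # time the underlying operation
            user = create_user(user_details)
    except Exception:
        metrics.incr('signup.failure')
        raise
    metrics.incr('signup.success')
    return user
```

Counters and timers like these flow straight into whatever graphing or alerting sits on top of StatsD, with no log scraping or database polling involved.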
A few speakers today made the point that generation of metrics should be included up front as part of the development of individual features. They could in fact be included in the ‘definition of done’, such that a feature is not considered complete unless appropriate metrics and appropriate monitoring are put in place off the back of it. By encouraging or enforcing metrics and monitoring in this way, we are likely to end up with a very rich insight into the application over time – at least if we are able to handle and dig into the data.
Etsy are often held up as an example of a team making heavy use of metrics in this style. Indeed, Mike states in this presentation that ‘Metrics are a part of every feature’:
Metrics Driven Development?
One of today’s speakers mentioned taking this idea further, using application metrics to really drive the features that we implement and how we implement them. Even though it sounds a little frivolous at first, I actually think there is a fair amount of potential in the idea.
For instance, imagine the scenario where we’ve been asked to add a new field to a form, perhaps a postcode on the sign-up form of some web application. This sounds like an innocuous requirement that we would usually just add without thinking twice. However, think of all of the metrics that we could capture before and after the change if we were to be aggressive with application metrics and monitoring (a rough sketch of how they might be captured follows the list):
- Form completion attempts % without new field
- Form completion attempts % with new field
- Form completion success % without new field
- Form completion success % with new field
- % time the new field is omitted
- % time the new field is entered but is invalid
- % time the new field is entered successfully
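As a rough sketch of how these could be captured, again assuming the Python `statsd` client; the metric names and the crude postcode pattern are purely illustrative, and the percentages above would be derived in the dashboarding layer by dividing the relevant counters:

```python
import re
import statsd

metrics = statsd.StatsClient('localhost', 8125, prefix='signup.postcode')

# Crude UK-style postcode pattern, purely for illustration.
POSTCODE_RE = re.compile(r'^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$', re.IGNORECASE)

def record_postcode_metrics(postcode):
    """Bump one counter per submission; the dashboard derives the percentages."""
    metrics.incr('form.attempt')
    if not postcode:
        metrics.incr('field.omitted')
    elif not POSTCODE_RE.match(postcode.strip()):
        metrics.incr('field.invalid')
    else:
        metrics.incr('field.valid')
```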
Metrics like this would give us great insight into whether the change was successful, whether it led to breakage, how it has changed user behaviour, and whether it is delivering net value to the business – effectively acting as an A/B test. The bottom few metrics in the list could be used to track and improve the field over time, assuming there is some benefit to the business in capturing the new data.
We could also make our metrics more granular so that we can pivot by, say, geography or mobile device to identify any problems in relation to those users. [This would create huge volumes of data for an absolutely trivial field, but I could nonetheless see a net benefit whilst the feature is being rolled out.]
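With a plain StatsD-style client, one way to get that granularity is to fold the dimension into the metric name; a tag-aware client would do the same job more cleanly. In this sketch, how the country and device are determined (GeoIP lookup, user-agent sniffing and so on) is left out, and the names are again made up:

```python
import statsd

metrics = statsd.StatsClient('localhost', 8125, prefix='signup.postcode')

def record_invalid_postcode(country, device):
    metrics.incr('field.invalid')                   # overall counter
    metrics.incr(f'field.invalid.geo.{country}')    # per-country breakdown
    metrics.incr(f'field.invalid.device.{device}')  # per-device breakdown

# e.g. record_invalid_postcode('GB', 'mobile')
```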
The point is that even in this trivial example, working in this style does sound like a very appealing and value-driven way to deliver software:
- Our implementation decisions should be data driven. We should take the smallest possible step and measure the impact in order to maximise value delivery;
- We should only be implementing features if they deliver a net benefit. If we can’t measure some feature, how can we be sure it provided a net benefit? Perhaps we should deliver something of more measurable value first;
- If we can’t monitor some feature, how can we be sure it can be deployed reliably and consistently? Not being able to monitor and measure it certainly reduces the desire to deliver the feature. Perhaps we should deliver something we can more safely deliver first?
My point is this: in a data-driven organisation, metrics and monitoring at least deserve to be brought forward in the process as a factor that heavily influences the development decisions we make. A new X-driven-development has been born!