Build, measure & learn your way to robust data analysis

How to build, measure and learn your way to more robust data insight

By Brilliant Noise, February 2016.

In 2002, Daniel Kahneman won the Nobel Prize in economics for his research into the psychology of decision making. Ten years on, Nate Silver used data science techniques to correctly predict the outcome of the US presidential election in all 50 states, gaining both himself and statistics-based forecasting international recognition.

Behavioural economics and data science have different languages, tools and approaches. However, as James Guszcza (US Chief Data Scientist for Deloitte Consulting LLP) has pointed out, understanding, predicting and influencing behaviour sits at the heart of both.

Guszcza writes about the areas in which behavioural economics and data science overlap and complement each other to create meaningful results. He points out that insights gleaned from data science are usually left to topic area specialists to understand and implement. In addition, behavioural ‘nudges’ are “often one-size-fits-all affairs applied to entire populations rather than analytically identified sub-segments”.

The same is true of companies and agencies working in digital product, content and service development. When data, UX, design and strategy all work in isolation, the true value of these disciplines can be lost.

Having a single approach to digital research that informs user testing, product testing and customer research is essential. This is an area where combining data science with behavioural economics can create deep insight. But first we need a framework to bring the two together.

The Lean Loop

In the last five years, lean start-up thinking has disrupted the way many businesses operate, organise and plan. One of its main tenets is the ‘build-measure-learn’ loop, or the ‘lean loop’.

The loop is a way of iteratively improving our ideas and our predictions about the success of these ideas.

Many traditional research approaches and user testing models are not wholly compatible with this new way of thinking. They can provide insight that takes a long time to generate, is gathered at monthly intervals or longer, and is too discrete or isolated. This can lead to insights that are outdated by the time a report is written. We can also be overconfident about the insights generated, making it hard to iterate our ideas in a quick and agile way.

What we need is an ‘always-on’ approach to research, in the same way customer-focused organisations approach digital content.

Understanding Data in Context

When the lean loop is used to generate insights, the data is often noisy. It can be hard to separate true relationships in the data from coincidences and correlations. This can happen when data capture systems are not set up appropriately at the start of a project, or when tests are not designed to ensure the data captured is ‘clean’ and the relationships within it are understood. Understanding context when we collect data is essential for creating meaningful insights.
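
As a rough, purely hypothetical illustration of how easily noise can masquerade as a relationship, the short Python sketch below draws two completely unrelated random series and counts how often a small sample shows a correlation that looks meaningful:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

def spurious_correlation_rate(sample_size=10, trials=1000, threshold=0.5):
    """Share of trials in which two unrelated random series show |r| above the threshold."""
    strong = 0
    for _ in range(trials):
        x = [random.gauss(0, 1) for _ in range(sample_size)]
        y = [random.gauss(0, 1) for _ in range(sample_size)]  # independent of x by construction
        if abs(statistics.correlation(x, y)) > threshold:
            strong += 1
    return strong / trials

# With only ten data points per series, an apparently 'strong' correlation
# shows up in a noticeable share of trials even though no real relationship exists.
print(f"Share of trials with |r| > 0.5: {spurious_correlation_rate():.1%}")
```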

The lean loop approach to research and testing is a form of Bayesian inference and updating. Each time we enter a ‘loop’, we hold a belief about how likely our idea is to succeed (our Bayesian prior). We test the idea, gather evidence and update that belief; the updated estimate (the posterior) then becomes the prior for the next pass, in which we test the changes we have made to improve the idea’s chances of success.
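
As a minimal sketch of that idea, illustrative rather than anything prescribed in the original article, the Python snippet below uses a Beta-Binomial model with made-up figures: the current belief about a conversion rate acts as the prior, each round of test results updates it into a posterior, and that posterior becomes the prior for the next pass through the loop.

```python
# Beta-Binomial updating: a simple, conjugate model of the lean loop's
# "update your belief after each test" step. All figures are hypothetical.

def update_beta(alpha, beta, successes, failures):
    """Return the posterior Beta parameters after observing new test data."""
    return alpha + successes, beta + failures

# Start with a weak prior belief about the conversion rate (~10%).
alpha, beta = 2, 18

# Each tuple is one pass through build-measure-learn: (conversions, non-conversions).
loop_results = [(12, 88), (30, 170), (55, 245)]

for i, (conv, non_conv) in enumerate(loop_results, start=1):
    alpha, beta = update_beta(alpha, beta, conv, non_conv)
    mean = alpha / (alpha + beta)
    print(f"After loop {i}: estimated conversion rate = {mean:.1%}")
```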

Garbage In, Garbage Out

By applying a mixture of techniques, tools and approaches from behavioural science, experimental economics and data science to the lean loop, we can ensure that the insights we generate are as meaningful as possible. This saves time and leads to a more targeted approach to testing.

We can increase our ability to develop actionable data and insight by incrementally improving the inputs at each of the loop’s three stages.

Creating useful inputs, and generating meaningful outputs, can be achieved in the following way:

  • Build: Applying research findings from behavioural economics, as well as customer insight, to your product or process ideas allows you to create much better initial hypotheses. At this stage of the process, you should already be thinking about the type of data you want to capture. Putting proper analytics in place at the beginning of the process is essential to collecting useful data.
  • Measure: Principles from experimental economics and social psychology can help us design more robust and innovative research methodologies. This means we will have cleaner, less noisy data, from which we can try to isolate the true causal relationships (see the sketch after this list). Understanding the context within which the data is captured means we can create much more meaningful insights.
  • Learn: Using data science analysis techniques gives us greater insight into what our data means. We have many analysis techniques at our disposal, allowing us to pick the appropriate mix of techniques depending on the context.
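
To make the measure and learn steps concrete, here is a minimal sketch, with entirely hypothetical figures, of how the results of a simple randomised A/B test might be analysed using only Python’s standard library. Because visitors are randomly split between the two variants, a statistically significant difference can be read as a causal effect of the change rather than a coincidence in noisy data.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (no external libraries needed)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - normal_cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical results from one pass through the loop:
# variant B (the new idea) vs variant A (the control).
lift, p_value = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"Observed lift: {lift:.1%}, p-value: {p_value:.3f}")
```

In practice a team would also report a confidence interval and decide the sample size in advance, but the shape of the calculation stays the same.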

Changing the Way We Work

Traditional research, design and strategy departments — as well as the operational and organisational structures that manage them — have been disrupted by the digital revolution. This disruption has caused different rates of change in different business areas.

Approaches such as lean start-up thinking, the lean loop and agile project management are changing the way we work. However, the disciplines of customer insight, research, data analysis and strategy still often work in silos, handing insights over to each other as discrete reports.

Putting data science and behavioural economics together, and implementing a framework that creates one ‘test and learn’ team, enables continuous, incremental improvements towards well-defined goals in a measurable, logical and predictable way.


Enter your email below to sign up for more of our latest content or get in touch here if you’d like to speak to us about your data and insights needs.
