We can be very good at answering questions, but why don't we challenge them?

A problem (among many) with data is that people tend to ask questions that are easy. How many people clicked this button, and who were they? These questions are easy to ask, occupy time, fill KPI cards and are often easy to answer. Why do so few of us push back and ask whether it is the right question? Why did they click the button? Oh, we don’t have that data!

But the constraints we create can give us biased data, because we don’t understand human behaviour in context.

----

In 1973, two behavioural scientists, John Darley and Daniel Batson, published "From Jerusalem to Jericho: A Study of Situational and Dispositional Variables in Helping Behavior", an investigation into the psychology of prosocial behaviour.

Darley and Batson recruited students training for the ministry at Princeton Theological Seminary to determine how situational factors influence prosocial behaviour. The hypothesis: when someone is kind to another, is it because he or she has some innate quality that leads to kindness, or because situational factors simply allow for kind behaviour?

The famous study ran over three days in late fall. Seminary students came to a building, met with a researcher, and filled out a set of surveys. The surveys partly addressed whether the students were religious primarily for intrinsic or extrinsic reasons (with "intrinsic" meaning something like "I am motivated to do good in the world" and "extrinsic" something like "I really want to get into heaven"). The participants were then told to prepare a brief talk on the Good Samaritan, the story of a hapless victim on the side of the road who was passed by holy individuals, whilst a non-holy Samaritan took the time to stop and help the fellow out. The story's context is significant because the participants were told they needed to walk to a nearby building, meet another member of the research team, and give their talk there. However, by random assignment, each student was told that they:

  • Had plenty of time and were early,

  • Were on time, but should head over now so as not to be late, or

  • Were running late and really needed to go without delay, no excuses.

The situation was, of course, rigged: on the route, every participant encountered a fallen stranger (an actor) slumped in a narrow alleyway, apparently sick and in need of help. The narrow alleyway was chosen because it forced a choice: help this man or step over him!

The "time constraint" drove a clear change in behaviour:

  • 63% of participants in the “early” condition stopped to help the stranger.

  • 45% of participants in the “on-time” condition stopped to help the stranger.

  • 10% of participants in the “late” condition stopped to help the stranger.

Overall, roughly 60% of the participants did not stop to help the "victim." The irony is hard to miss: these were Princeton Theological Seminary students, training for the ministry and about to give a talk on the lessons of the Good Samaritan, yet their behaviour was swung by the manipulated constraint.
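
For the data-minded, here is a back-of-envelope check of that overall figure. The per-condition rates come from the study; the roughly equal split across conditions is an assumption for illustration only.

```python
# Rough check of the overall helping rate, assuming the three hurry
# conditions were roughly equal in size (an assumption for illustration;
# only the per-condition rates above come from the study).
help_rates = {"early": 0.63, "on_time": 0.45, "late": 0.10}

helped = sum(help_rates.values()) / len(help_rates)  # ~0.39
print(f"Estimated share who stopped to help: {helped:.0%}")      # ~39%
print(f"Estimated share who walked past:     {1 - helped:.0%}")  # ~61%
```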

A side note before the core finding: dispositional factors (what you believed) had no bearing on helping behaviour. In other words, people who reported being religious for intrinsic reasons were no more likely than others to stop and help.

When it comes to human behaviour, we have a strong bias toward thinking that people do what they do because of internal traits that drive their behaviour ("The Overconfidence Effect in Social Prediction", Dunning, Griffin, Milojkovic & Ross, 1990). The data shows that dispositional factors are relatively weak predictors of what we do, whilst situational factors (which we often cannot see or measure, and which often seem benign or inconsequential) play a powerful role in shaping our behaviour.


We can only answer the questions we have data for, but that does not mean the answer is right or that the data is a good predictor, because we don't understand the constraints under which it was collected.

CEO Takeaway

If data supports your decisions, who is accountable and responsible for ensuring it answers the question you actually want answered, and is not just data without context? At the next board meeting, add an item under AOB: "Do we understand the situational bias in our data?" If it produces no debate, or only looks of bewilderment, perhaps it is time to ask better questions of those who assume the data is accurate.