Oh shit. How qualitative research could have saved election day.

13.05.2015

Written By: Jim Mott


So the final count is in, the dust is settling (ish) and the UK is hunkering down into another 5 years of jolly old austerity politics. Hopefully it won’t take too long for the losing parties to dust themselves off and get back into the fight. But before that they will obviously want to spend a bit of time pointing fingers and blaming other people for what went wrong.

Fortunately this time round they've got themselves a pretty easy target to gun for in the shape of the pollsters and their wildly inaccurate predictions of how the parties would shape up. Indeed, the last time the pollsters messed things up this badly was also the last time Labour had a spectacular implosion on election day, back in 1992.


Other than the exit poll, which gave a pretty respectable prediction of how things would turn out on the night, polls from all of the other major players were predicting a mere 1 percentage point difference between the two main parties, with things far too close to call. When the results were finally in and it was clear things had gone horribly wrong, ICM director Martin Boon bravely took the stand on Twitter to speak for the entire industry.

Now as then, there is a massive review underway. The British Polling Council has joined up with the Market Research Society to order an independent investigation in an effort to patch things up, but regardless of what it finds, this is undoubtedly a massive PR disaster for the polling industry and perhaps market research as a whole.

For us this is a call for a more joined up approach to any kind of ‘big data’ type study. As qualitative researchers we always find it rather troublesome when there is an attempt to map out what is essentially a very emotional landscape without actually spending some proper time with real people.

This is not to say that polls have no role to play, but rather that an overreliance on hard quantitative 'data' had the unfortunate effect of reducing the swirling maelstrom of tribal passions, disenchantment, confusion and indecision that characterised this election campaign to neat percentage points that ever so gently jiggled up and down as the weeks rolled by. The gap between intention and action is always a troublesome one to predict, and in a campaign as charged as this one, relying on simply asking people their voting intentions clearly fell flat on its face.

The polls have worked fine in the past because they were able to offset the imbalance between voting intention and action with previous data on actual voting behaviour to generate a margin of error. This time round, with the introduction of so many new and emergent factors (the rise of UKIP, the nationalist mood in Scotland, the prospect of another hung parliament), the whole landscape has changed, meaning previous behaviour doesn't really work as a reliable benchmark.

This is where qualitative research could have fleshed out the landscape: by actually spending time in constituencies where a swing vote was likely. Understanding the social and emotional influences that were at play, talking to people about their attitudes towards politics and voting, looking at decision making in a broader context and perhaps even generating some mock-up polling booth style exercises would have given the pollsters more of a behavioural read on how things would have played out on the day.

Beyond that, it might have acted as something of a palliative for the politicians themselves, whose own reliance on the polls led to a campaign characterised by the utter absence of any connection between the main candidates and real people.

Drafty warehouses, carefully choreographed meetings with handpicked members of the 'general' public and neat rows of 'supporters' waving pre-approved placards were the order of the day. Screened, roped-off and vacuum-packed into little pro-party echo chambers, all of the main party leaders did their best to avoid the buzz kill of an angry constituent encounter. Now it looks like exactly that kind of encounter might have rung some alarm bells that things were not going the way the polls were predicting.

As we steam away into the bold new future of ‘big data’ one hopes May 7th might well serve as the moment when it became clear that replacing people with data points is one thing, but forgetting to look at the connections that hold those points together is another thing entirely.
