The biggest shock of the UK General Election in 2015 was not the collapse of the Liberal Democrats to 8 seats, or Labour’s decimation in Scotland, or the Conservatives achieving an overall majority; the biggest shock was the BBC exit poll. Released at 10pm on May 7th as the polling stations closed, the exit poll overseen by Prof. John Curtice defied so many expectations, so many weeks and months of polls showing the Conservatives and Labour neck-and-neck, that no one could believe it.

The pre-election polls weren’t just wrong, they were bizarrely and uniformly wrong. The BBC’s poll of polls (margin of error +/- 3) for the last six months showed Labour and the Conservatives as too close to call (BBC PoP Methodology). Similarly for the Economist, and nearly every other poll tracker. Looking at individual polls instead of merged polls, the only times the polls came close to the end result were on April 29th (Ipsos MORI) and April 26th (Ashcroft), as you can see from UK Polling Report. Initial comments from polling companies were that the results were within their margin of error, and recently individual pollsters have been coming out to say ‘Actually, I got it nearly right, ignore my company‘. The truth is put more clearly by Electoral Calculus:

“The actual election showed that support figures were: Con 37.8%, Lab 31.2%, Lib 8.1%, UKIP 12.9% and Green 3.8%, which implies a Conservative lead over Labour of 6.6%. The average of the final polls from all the pollsters only gave a Conservative lead of 0.2%. This poll error of about six per cent was the main driver of the prediction error. … the prediction would have been relatively accurate if the polling inputs [to our model] themselves had been more accurate.”

There has been much talk about the methodologies of the various polling companies, and reflections on them both from within the industry and from outside, so I don’t see any need to repeat them here. Lord Ashdown has attempted to explain the error as due to “people that were coming out to vote that [we] had never seen before… ‘never voters’ who [we] all ignored … until they became visible after they had voted“. This explanation is highly unlikely, given that turnout increased by only 1% overall in the UK from 2010, and 0.3% in England.

However, they all come back to blaming the sample: the poll respondents. It is important to step back and remind ourselves exactly what polling is: one person asking another, face-to-face, by telephone, or online, a question about their beliefs or intentions. There are an awful lot of assumptions in that simple definition of polling that psychology has a lot to warn us about.

Finding all the Shy Tories: ‘If only all those shy Tories just told us they were Tories!’ seems to be the refrain. Why are ‘Shy Tories’ unwilling to state their preference for the Conservative Party, or at the least state it to a stranger? They may want to avoid being seen negatively by the pollster, or being associated with a negative group (the ‘Nasty Party‘) or outcome (‘destroying the NHS‘), or they may want to be seen as ‘going with the crowd’ locally (Conservatives living in ethnically diverse or liberal areas like London). This reluctance to be seen as different from a perceived norm is the basis of the ‘social desirability bias‘. Shy Tories may also just respond as polite ‘Don’t Knows’, which ultimately get re-weighted as support for other parties, which isn’t useful if a large portion of ‘Don’t Knows’ are already ‘leaners’ or ‘shy’ supporters of a party.

This problem was already encountered in Obama’s campaign in 2008: how do you accurately account for support for a black President from typically white, Republican-leaning demographic areas? Quite a few methodologies have been employed to gauge support and the social desirability bias, the most popular being the ‘list experiment’ of Kuklinski, Cobb, and Gilens (1997), which has been used in testing attitudes to affirmative action programs (Sniderman and Carmines, 1997), support for biological understandings of race (Brueckner, Morning, and Nelson, 2005), attitudes toward Jewish political candidates (Kane, Craig, and Wald, 2004), support for a female presidential candidate (Streb et al., 2008), and support for a black president (Heerwig & McCabe, 2009).
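
To make the logic concrete, here is a minimal sketch (in Python, with invented counts, not data from any of the studies above) of the difference-in-means estimator at the heart of a list experiment: control respondents see a list of non-sensitive items, treatment respondents see the same list plus the sensitive item, and everyone reports only how many items apply to them.

```python
import numpy as np

# Invented data: control respondents count agreement with 4 non-sensitive
# items; treatment respondents see those 4 plus the sensitive item.
control = np.array([2, 1, 3, 2, 2, 1, 2, 3, 2, 1])    # counts out of 4
treatment = np.array([3, 2, 3, 2, 3, 2, 2, 4, 3, 2])  # counts out of 5

# Since the non-sensitive items are identical across groups, the gap in
# mean counts estimates the prevalence of the sensitive item, without
# any individual ever admitting to it directly.
prevalence = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
print(f"Estimated prevalence of sensitive item: {prevalence:.2f} (SE {se:.2f})")
```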

Redlawsk, Tolbert & Franko (2010) raise a variety of methodological problems with carrying out list experiments and suggest their own method, but all approaches share the problem of the effort involved, which makes them unlikely to be adopted by non-academic polling companies.

Getting around the Bias: There are a few ways companies can use psychology to minimise their error. Instead of divvying up Undecideds by some weighting factor (usually the previous General Election vote share), they could ask respondents simple attitude questionnaires, such as ‘thermometer ratings’ of parties and candidates, or support for issues associated with the two main parties, and weight ‘Don’t Knows’ probabilistically based on these. Research has shown that assessing explicit attitudes in this way is far more successful in correctly predicting Decided and Undecided voters’ choices than the implicit attitude measures suggested by some (e.g. Implicit Association Tests), and is less cumbersome.
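
As a rough sketch of this kind of probabilistic re-weighting (in Python, with hypothetical ratings, and not any pollster’s actual model): fit a simple model of vote choice on respondents who declared an intention, then allocate each ‘Don’t Know’ fractionally by predicted probability rather than by a fixed weighting factor.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 0-10 'thermometer' ratings of (Conservative, Labour) for
# respondents who declared a voting intention (1 = Con, 0 = Lab).
X_decided = np.array([[9, 2], [8, 3], [2, 9], [3, 7], [7, 4],
                      [1, 8], [6, 5], [4, 8], [8, 1], [2, 6]])
y_decided = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0])

# Model vote choice from explicit attitudes.
model = LogisticRegression().fit(X_decided, y_decided)

# 'Don't Know' respondents answered the same attitude items, so allocate
# each of them fractionally instead of splitting them by a fixed factor.
X_dont_know = np.array([[7, 5], [5, 6], [8, 4]])
p_con = model.predict_proba(X_dont_know)[:, 1]
print("P(Con) per Don't Know:", np.round(p_con, 2))
print(f"Expected Con share of Don't Knows: {p_con.mean():.2f}")
```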

A simpler, and arguably more useful, methodology is to ask respondents “[What way/For whom], do you think, would your neighbour vote?“. While perceptions of a neighbour’s expected vote might not always perfectly predict the respondent’s own vote (although research shows it has an effect), it can indicate where people are hiding their true preferences. In a US Gallup poll in 2005 about voting for a female President (Streb et al., 2008), 86% answered that they would vote for a “qualified woman for president”, while 34% said that “most of my neighbors” would not vote for a female president. Clearly, if we asked this of 100 people in the neighbourhood, someone must be concealing their true beliefs.

Armed with a probable discrepancy between the overt and neighbour attitudes, one can reweight the data in a more careful and conservative way, taking into account previous voting history (if polling a certain area) and respondent demographics, and assigning voting intentions in a more probabilistic fashion. In this way we start building a measure of confidence in the reliability or credibility of our sample.
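
One crude version of this, sketched in Python with invented shares (no pollster uses exactly this rule): treat any excess of the neighbour-attributed Conservative share over the overt share as a lower bound on concealed support hiding among the ‘Don’t Knows’, and shift that much of the ‘Don’t Know’ pool before the usual re-weighting.

```python
# Invented headline shares from a single poll (as proportions).
own = {"Con": 0.32, "Lab": 0.34, "Other": 0.16, "DK": 0.18}
# Shares when the same respondents say how a neighbour would vote.
neighbour = {"Con": 0.38, "Lab": 0.33, "Other": 0.16, "DK": 0.13}

# Excess neighbour-attributed Con support is read as a lower bound on
# 'shy' Con voters concealed among the Don't Knows.
concealed = max(neighbour["Con"] - own["Con"], 0.0)
shift = min(concealed, own["DK"])

adjusted = dict(own)
adjusted["Con"] = round(adjusted["Con"] + shift, 2)
adjusted["DK"] = round(adjusted["DK"] - shift, 2)
print(adjusted)  # Con rises to 0.38, DK falls to 0.12
```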

There are other insights from psychology that might improve the polling of people’s voting intentions, for instance around questions phrased as ‘thinking’ or ‘feeling’. Research on political ‘leaners’ in the USA found that replacing ‘think’ phrases with ‘feel’ phrases (“What party do you think/feel you are closest to?“) was more accurate in identifying leaners. It is useful to note that all UK polling companies ask variants of the same question: “If there was a General Election tomorrow, which party do you think you would vote for?“. Asking how people feel, or think, about something activates two differing psychological processes and may lead to two different answers.

An interesting thing to examine is whether the Shy Tory vote from 1992 (where the polls were out by 9.2%) is related to the Shy Tory vote in 2015 (e.g. whether the discrepancies occur in similar constituencies, or relate more to certain voter profiles).

Voting for the Good of the Country vs. Oneself: One of the explanations for the poll-outcome mismatch I’ve heard is that, when asked whom they might vote for, people tend to consider their vote in terms of what is best for themselves (‘self-interest’), but when it comes to the booth, vote in line with what is best for the country. Assessing this relies on retrospective considerations, and thus falls prey to the same problems of desirability bias as before: people may not want to be seen as having voted ‘selfishly’, and may not want to be seen as having voted for the losing party. It also doesn’t explain why there was a mismatch, unless voters who considered themselves personally better off under Labour felt that the country would be better off under the Conservatives. Not only would this up-end most (quasi-)rational voting models, but were it true we might have seen differences in questions run by YouGov, which asked respondents to say which party best reflects the phrase “The kind of society it wants is broadly the kind of society I want”. Unless, of course, respondents didn’t want to be seen as imagining a society designed by the Conservatives.

The simplest explanation is that voters didn’t need to make this choice: issue importance ratings collected by YouGov asked respondents to consider what they felt were the important issues facing their family, and those facing the country, with ~44% and ~52% respectively choosing the economy as important. The Conservative Party were consistently considered the best party on this issue, typically 18 percentage points ahead of Labour. Labour were only seen as better than the Conservatives on the NHS, but the NHS was not considered more important than the economy at either the country or the family level.

Polls as ‘Self-Fulfilling Prophecy’, or ‘Heralds of Doom’: For Labour supporters, the polls were a best-case scenario; no one really believed they could pull off a majority based on the polls. For Conservative supporters and ‘leaners’, the poll results indicated a minority Conservative Party unable to govern without the Liberal Democrats and potentially the DUP in coalition, or a Labour Party government supported in some form by the Scottish National Party: a ‘nightmare scenario’. It is thus important to split the polls’ effects on the electorate into two parts: a ‘bandwagon effect‘, and ‘motivation‘.

The ‘Bandwagon Effect’ is one of the most studied effects in psychology, synonymous with ‘herding’ or following the crowd. Polls provide a snapshot of public opinion at any one time, but in turn can affect individual-level attitudes. Simply, people use polls as cues to the majority opinion, and are likely to believe or follow that majority opinion (see Rothschild & Malhotra, 2014). This isn’t entirely irrational or ‘stupid’ of voters, particularly for someone with low interest in politics, or who doesn’t want to spend time on it. The heuristic of ‘if it’s good enough for everyone else, it’s good enough for me’ is useful, in that the antelope that stops to question the wisdom of the crowd in running tends to be the one eaten. Majority opinion implies group knowledge not confined to any one individual, and implies a degree of consensus (which acts as a proxy for opinion credibility), leading to conformity with the majority opinion. In this way polls can be self-fulfilling.

Given a probable outcome, how do people decide whom to vote for in the booth? Prospect Theory (Kahneman & Tversky, 1979; Kahneman, 2011) is our old friend here, concerning how people choose among risky alternatives where the outcome probabilities are known. Of particular interest is the fairly well-established principle of ‘loss aversion‘, the tendency for people to strongly prefer avoiding losses over acquiring gains. The public opinion polls never reflected that people thought they might be better off under Labour, and with most of the UK’s papers amplifying the message to not ‘risk it with Labour’, the result is fairly academic.
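
For the formally minded, the standard way to write down loss aversion is via the prospect-theory value function (the functional form and parameter estimates below come from Tversky & Kahneman’s 1992 follow-up work, not from this post’s sources):

```latex
% Value of an outcome x relative to a reference point (the status quo):
v(x) =
  \begin{cases}
    x^{\alpha}            & \text{if } x \ge 0 \quad \text{(gains)} \\
    -\lambda (-x)^{\beta} & \text{if } x < 0  \quad \text{(losses)}
  \end{cases}
% Typical estimates: \alpha \approx \beta \approx 0.88 and
% \lambda \approx 2.25, so a loss looms about 2.25 times larger
% than an equal-sized gain.
```

With λ well above 1, the ‘nightmare scenario’ the polls implied weighs more heavily in the decision than an equivalent hoped-for gain, which is exactly the asymmetry the next paragraph turns on.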

With loss aversion, and the perceived tightness of the polls, there was far more motivation for people to go to the polls to support the Conservatives than Labour: among the Conservative core vote, the shy Tory ‘Don’t Knows’, and the true ‘Undecideds’, a Conservative vote reduced uncertainty and avoided the perceived larger risk, with uncertain pay-offs, implied by a Labour minority or Labour-SNP government, in favour of certainty and a possibly slightly reduced payoff with the Conservatives.

Labour didn’t benefit from any late poll surge that might have cascaded into convincing people that voting for Labour could overturn a Conservative advantage, or limit the influence of the SNP if they were in Government. Similarly for the Conservatives, the fact that the polls never indicated a probable outcome in which they would achieve an overall majority likely decreased the perceived need for supporters of other parties to vote tactically for Labour, and encouraged disaffected supporters to stay with the party.
For the Conservative Party, the polls were perfect.

Where Next? From a psychological perspective, it is hard not to support the idea of banning the publication of public opinion polls in the three weeks before an election, letting the public be influenced, as far as possible, only by the informational content of the manifestos. As a scientist, however, it’s hard to state support for this when we are unsure whether the polls exert stronger biasing influences than those from family, neighbours, party leader images, newspapers, etc. Where does one start?

Yet there is a very clear danger from the possibility of fraudulent polling, or from cynically using polling results: publishing or using polls in such a way as to suppress a party’s support base, by discouraging turnout or encouraging switching to another party (as done on a small scale by the Liberal Democrats), or lulling parties into a false sense of security. People are right to question people like the born-again pollster Lord Ashcroft, a Conservative peer, who buys in polling from other companies and publishes the results, but won’t reveal who the companies are. Lord Ashcroft is not a member of the British Polling Council either, the only attempt at a regulatory body for polling. It is also worth pointing to his blog, where he has a speech and video he “gave last night at the Post-Election Conference, jointly hosted by Conservative Home, the TaxPayers’ Alliance, Business for Britain and the Institute of Economic Affairs”. On numerous occasions I have seen the results of Ashcroft’s polls in key seats on the faces of activists of all political parties, so to say they do not have psychological influence does Ashcroft a disservice.

Finally, there is an obvious question to be answered: why, when people emerge from the polling booth, are they so willing to tell BBC exit pollsters ‘Yes, I voted Conservative’, but not the pollsters before election day? Maybe it is time for an experiment?

Note: if you can’t obtain any of the sources or papers linked in this article, email me and I’m happy to provide my copies.
