Still catching up on post-election analysis. As you may recall, the final poll of Iowa’s U.S. Senate race by Selzer & Co differed greatly from all other polls taken during the final week of that campaign. Usually “outliers” are inaccurate, but in this case only the Selzer poll was close to predicting Joni Ernst’s final margin over Bruce Braley. This problem wasn’t limited to Iowa; as a whole, U.S. Senate polling in 2014 was less accurate than Senate polling in other recent election years. In three states, the polling average missed the final result by more than 10 points. Polls were skewed toward Democratic candidates in most of the competitive states.
Nate Silver argued last week that “herding” by pollsters contributed to errors in polling the IA-Sen race, among others. “Herding” refers to a pollster adjusting survey results to avoid releasing findings that look like an outlier. You should click through to read Silver’s whole post, but the gist is that as election day approached, IA-Sen polls converged toward a narrow band of findings, showing either a tied race or a small lead for Ernst. Random sampling error should have produced more statistical “noise” and more varied results than that. Roughly a third of the polls taken should have fallen outside that narrow band.
As I said, sampling error is unavoidable – an intrinsic part of polling. If you’ve collected enough polls and don’t find that at least 32 percent of them deviate from the polling average by 3.5 percentage points, it means something funny – like herding – is going on.
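That 32 percent figure is just the normal distribution at work: about a third of results should land more than one standard deviation from the average, and for typical Senate polls the standard deviation of the reported margin works out to roughly 3.5 points. Here is a rough simulation of that logic – the true margin (4 points) and sample size (800 respondents) are illustrative assumptions, not figures from Silver’s post:

```python
import random

def simulate_poll(true_share=0.52, n=800, rng=random):
    """Simulate one two-candidate poll of n respondents.

    Each respondent backs candidate A with probability true_share;
    returns the poll's reported margin (A% minus B%) in points.
    """
    votes_a = sum(1 for _ in range(n) if rng.random() < true_share)
    share_a = votes_a / n
    return (share_a - (1 - share_a)) * 100

random.seed(42)
polls = [simulate_poll() for _ in range(10_000)]

avg = sum(polls) / len(polls)
# Fraction of polls whose margin lands more than 3.5 points from the average,
# even though every poll here is an honest random sample of the same electorate.
outside = sum(1 for m in polls if abs(m - avg) > 3.5) / len(polls)

print(f"average margin: {avg:.1f} points")
print(f"polls more than 3.5 points from average: {outside:.0%}")
```

Run it and roughly a third of the simulated polls fall outside that 3.5-point band purely from sampling noise. When the real IA-Sen polls clustered far more tightly than that, something other than chance was keeping them together.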
It will be interesting to see whether state-level presidential and Senate polls during the 2016 cycle show more variability, or whether pollsters continue the apparent practice of adjusting results to avoid standing out from the crowd. Silver notes that sampling error is just one of many sources of error in polls. Figuring out which people are likely to cast a ballot may be an even bigger problem.
Any relevant thoughts are welcome in this thread.
3 Comments
Huh
I can’t imagine why pollsters would want to avoid releasing a poll that looks different from the others. It’s not like anyone jumped down Ann Selzer’s throat when she released the poll showing Ernst up 7.
Well, other than every Democrat in Iowa.
xjcsa Sat 22 Nov 6:56 PM
it would have been exactly the same
if the shoe had been on the other foot. Republicans complained about plenty of polls in 2012 that turned out to be accurate or, if anything, to understate Democrats’ leads.
I honestly thought the race was much closer than that.
desmoinesdem Sat 22 Nov 8:01 PM
Things like math and procedure matter
These guys were professional pollsters. If everyone had been reporting their actual empirical results, Selzer wouldn’t have seemed so far outside the consensus. Calling the lone dissenting poll the “outlier” is only credible if the consensus reflects what everyone actually found – that’s the nature of outliers. And “every” Democrat wouldn’t have been miffed at the DMR for supposedly calling the race wrong right before election day if they’d known that many “professionals” were too afraid to publish their real numbers. Kudos to Selzer for sticking by their numbers regardless of their popularity. That integrity will be a tangible factor in how their future polls are weighed. I wonder whether their data for the poll in between really showed Joni clearly leading, or whether they got nervous on that one because they were outside the consensus.
john-thompson Sun 23 Nov 1:57 AM