Sayfie Review Featured Column
The Contest to Predict the Presidential Winner in Florida Was as Close as the Race Itself
by Dr. Aubrey Jewett
January 12, 2017
University of Central Florida, Department of Political Science
The 2016 presidential election is over and Donald Trump is the official President-elect of the United States. Florida once again had the most electoral votes up for grabs among all battleground states. The Sunshine State awarded 29 electoral votes to Donald Trump after he beat Hillary Clinton by a little over one percentage point of the popular vote (48.6 to 47.4), reversing President Obama's one-point Florida win in 2012. Trump's victory shocked many Floridians. Much of this surprise was based on pre-election polls that suggested Hillary Clinton led nationally and was ahead in enough battleground state surveys (including Florida) to win the Electoral College. Most websites that aggregate surveys and produce poll averages also predicted that Trump would lose overall.
As it turns out, the national polls correctly predicted that Clinton would win the national popular vote, although most overestimated her actual two-point margin by one to four percentage points. In contrast, state polls and poll aggregation websites were off in their predictions for several key Rust Belt states that had voted Democratic in every presidential election since the 1980s (namely Wisconsin, Michigan, and Pennsylvania). And while Florida was always considered a toss-up state, most polls and a number of poll aggregation websites also seemed to point to a Hillary Clinton win in the Sunshine State. Of course, in the end, Trump won those three Rust Belt states and Florida, and thus the Electoral College.
This research assesses the accuracy of four popular poll aggregation websites based on their ability to predict the presidential winner in Florida. The four websites and models compared include Real Clear Politics’ RCP Poll Average, Talking Points Memo’s PollTracker, FiveThirtyEight’s Projected Vote Share and Huffington Post’s HuffPost Pollster. All four websites are freely available, fairly well known to people who follow politics, cut across the ideological spectrum, have relatively high web traffic (all within the top 1600 websites based on Alexa rankings in the United States), made final predictions for the presidential race in Florida (and have done so for multiple elections), used divergent polls and methodologies to come up with their predictions, and ended up with different predictions as to who would most likely win Florida.
In the end, all four websites predicted the presidential race in the Sunshine State would be close, but only one website correctly predicted that Donald Trump would win Florida: Real Clear Politics. Talking Points Memo finished a close second, predicting a tie in Florida. FiveThirtyEight and Huffington Post finished third and fourth respectively in the Sunshine State accuracy contest, since they predicted a Clinton win by a small or modest margin. Below, the final prediction, methodology, and a sample of individual polls tracked over the last week by each website are discussed.
Real Clear Politics
Real Clear Politics’ final RCP Poll Average predicted that Donald Trump would win Florida by 46.6 to 46.4 percent, a razor-thin margin of just two-tenths of one percent. Figure 1 shows that according to the RCP Poll Average, the race in Florida was neck and neck, with 10 lead changes over the final five months: Trump surged starting on October 15th, took the lead by October 31st, lost it on November 3rd, and only retook it on November 7th, just 24 hours before Election Day.
Real Clear Politics uses a fairly straightforward methodology (summarized in Figure 2), although the website does not provide a stand-alone link or page explaining it. The final RCP Poll Average is literally the statistical mean of the final polls included in the calculation: they add up the numbers for Clinton and Trump and then divide by the number of polls included to get the average for each. Real Clear Politics only includes publicly published polls that they deem valid (they are occasionally criticized for not being clearer about their criteria for inclusion). They only use surveys that rely on a random sample as traditionally defined, and thus primarily include surveys that use land lines and cell phones to solicit opinion, excluding most internet-based polls, which tend to rely on self-selected samples. For the final poll average, they only included polls released in the final week of the race in November, and if a pollster released more than one poll over that span, they used only the last one.
Thus the final RCP Poll Average included seven polls: four had Clinton ahead by just one or two points, one called it a tie, and two had Trump winning by either three or four points. The polls used in the final RCP Poll Average are displayed in Figure 3. CBS News / YouGov was the only online panel survey among the final seven, and although completed online, it included a number of respondents selected at random from voter registration lists and contacted by telephone first.
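The simple-mean method described above can be sketched in a few lines of code. The numbers below are illustrative placeholders that merely follow the pattern of the final seven polls (four small Clinton leads, one tie, two Trump leads); they are not the actual published figures.

```python
# Illustrative sketch of a simple poll average: add up each candidate's
# numbers across the final polls and divide by the number of polls.
# These values are placeholders, not the real 2016 pollster results.
polls = [  # (clinton_pct, trump_pct)
    (47, 46), (48, 46), (46, 45), (47, 45),  # Clinton up by one or two
    (46, 46),                                # a tie
    (44, 47), (43, 47),                      # Trump up by three or four
]

def rcp_style_average(polls):
    """Arithmetic mean of each candidate's share across the final polls."""
    n = len(polls)
    clinton = round(sum(c for c, _ in polls) / n, 1)
    trump = round(sum(t for _, t in polls) / n, 1)
    return clinton, trump

clinton_avg, trump_avg = rcp_style_average(polls)
# Even with four of seven polls favoring Clinton, the two larger Trump
# leads pull the mean to a narrow Trump edge, much as in the real
# 46.6 to 46.4 final average.
```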
Figure 1
Real Clear Politics Final Florida Poll Average
Figure 2
Real Clear Politics Final Poll Average Methodology
(As Summarized by Professor Jewett)
Figure 3
Polls Used to Calculate Final Florida Real Clear Politics Poll Average
Talking Points Memo
Talking Points Memo’s PollTracker called the race a tie in their final estimate on November 5th: 46.6 to 46.6 percent. Figure 4 shows that PollTracker had Hillary Clinton ahead during the entire last five months of the race. Trump surged in late August and almost achieved a tie by early September but then faded as Clinton pulled away. Trump made a final push starting October 21 and caught up on the last weekend before Election Day. Figure 5 contains the methodology used by Talking Points Memo as found on its website.
Like Real Clear Politics, PollTracker primarily relies on surveys using traditional random sampling done by phone and does not include internet polls in its calculations (although, like Real Clear Politics, it did include CBS News/YouGov). As Figure 6 notes, Talking Points Memo does post the results of most internet-based polls on its website so that readers can see those results as well. However, Talking Points Memo differs from Real Clear Politics in that it uses regression analysis to calculate its daily averages rather than simply taking the arithmetic mean of the most recent polls. Regression is a statistical technique that fits a line of “best fit” to a set of data points. PollTracker continually adds the newest poll results to the older poll numbers to update the current state of the race, so data from all previous polls are included each time the average is updated.
This is the major difference from Real Clear Politics, which simply takes the average of the most recent polls when it updates the state of the race and drops older polls out of the calculation entirely. Regression analysis tends to smooth out fluctuations in the data, which helps explain why PollTracker, even when relying on mostly the same polls, showed Clinton holding a steady lead in Florida for five months straight until the very end, compared to the RCP Poll Average, which showed ten lead changes over that same period.
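The trend-line idea can be sketched with an ordinary least-squares fit over dated poll results; a new poll then nudges the fitted line rather than replacing the average outright, which is why such estimates move smoothly. The dates and support numbers below are invented for illustration, and PollTracker's actual regression is more sophisticated than a single straight line.

```python
# Illustrative sketch: fit a least-squares line to one candidate's
# support over time, then read the fitted value on election day.
# All data points are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares: return (slope, intercept) for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    ss_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    ss_xx = sum((x - mean_x) ** 2 for x in xs)
    m = ss_xy / ss_xx
    return m, mean_y - m * mean_x

# Days relative to the election (0 = election day) and support in percent.
days = [-30, -25, -20, -15, -10, -5, 0]
support = [45.0, 45.5, 45.2, 46.0, 46.1, 46.4, 46.6]

m, b = fit_line(days, support)
estimate_today = m * 0 + b  # fitted support on election day
# Because every older poll still contributes to the fit, one outlier
# poll shifts the estimate far less than it would shift a simple mean
# of the latest polls.
```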
Figure 4
Talking Points Memo Final Florida PollTracker Estimate
Figure 5
Talking Points Memo Final PollTracker Methodology
(As Explained by their “FAQ” and “Methodology” Drop Down Boxes)
Figure 6
Latest Polls Tracked to Calculate Final Florida Talking Points Memo PollTracker Average
(Regression Analysis Used to Calculate Average Included Data from All Earlier Polls)
(Polls with ** Were Not Included in the Average)
FiveThirtyEight
FiveThirtyEight predicted Hillary Clinton would win Florida by less than one percentage point: 48.1 to 47.5% (they also calculated that Clinton had a 55.1% chance of winning on Election Day). Figure 7 indicates that the race was close over the last five months in Florida, with Hillary ahead most of that time by a small margin (never larger than five points), but with Trump occasionally taking a small lead in late July, mid-to-late September, and for about four days in early November.
In fact, FiveThirtyEight indicated Trump had the lead in Florida as late as November 5th, but in its final November 7th projection Clinton took the lead by six-tenths of one percent. Figure 8 presents a simplified version of the relatively complex regression model FiveThirtyEight uses to calculate its forecasts. FiveThirtyEight starts with the actual results of published polls, but then “the model weights each poll by its sample size, how recently it was conducted, and the historical accuracy and methodology of the polling firm.” Next it adjusts the polling data for likely voters, omitted third parties, trend line, and house effects (the tendency of a pollster’s results to lean toward one party or the other over time). Then undecided and third-party voters are allocated to the adjusted polls. This is the “Polls Only” model. FiveThirtyEight also calculates a “Polls Plus” forecast, which adds state demographic, economic, and historical information to the Polls Only model (in Florida in 2016, these additional factors did not change the Polls Only percentages at all).
Unlike Real Clear Politics and Talking Points Memo, FiveThirtyEight also includes internet-based polls in its model alongside traditional telephone surveys. As Figure 9 shows, a total of 148 polls from a large number of pollsters employing a variety of survey methods were used in the final Florida model, and each was weighted, adjusted, and then included in the final FiveThirtyEight projection.
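The core weighting idea, counting a poll for more when its sample is larger and it is more recent, can be sketched as below. The specific weight formula (square root of sample size times an exponential recency decay) and the poll numbers are illustrative assumptions, not FiveThirtyEight's actual model, which also adjusts for pollster quality and house effects.

```python
# Illustrative sketch of recency- and size-weighted poll averaging.
# The weight formula and poll data are assumptions for demonstration,
# not FiveThirtyEight's actual methodology.
import math

polls = [
    # (days_before_election, sample_size, clinton_pct, trump_pct)
    (20, 600, 47.0, 44.0),
    (10, 900, 46.0, 45.0),
    (3, 1200, 46.5, 46.0),
    (1, 800, 45.5, 47.0),
]

def weighted_estimate(polls, half_life=7.0):
    """Weight each poll by sqrt(sample size) and exponential recency decay."""
    totals = {"clinton": 0.0, "trump": 0.0}
    weight_sum = 0.0
    for days_old, n, clinton, trump in polls:
        w = math.sqrt(n) * 0.5 ** (days_old / half_life)
        totals["clinton"] += w * clinton
        totals["trump"] += w * trump
        weight_sum += w
    return {k: v / weight_sum for k, v in totals.items()}

est = weighted_estimate(polls)
# Because the most recent polls here lean toward Trump, the weighted
# Trump estimate sits above his simple (unweighted) mean of 45.5.
```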
Figure 7
FiveThirtyEight Final Projected Florida Vote Share and Chance of Winning
Figure 8
FiveThirtyEight Final Projected Vote Share and Chance of Winning Methodology
(See Below for a Description of How the Polls Were Weighted)
Figure 9
Latest Polls Used to Calculate FiveThirtyEight Final Projected Florida Vote Share
(Earlier Polls Were Also Used, But Were Given Less Weight)
Huffington Post
Huffington Post predicted Hillary Clinton would beat Donald Trump in Florida by almost two percentage points: 46.8 to 45.0. Figure 10 illustrates that Clinton held a steady lead over the last five months of the race in Florida, with Trump never once pulling ahead. HuffPost Pollster suggested that Trump never pulled within three percentage points until late October and that, even then, Clinton held a steady two-point lead over the final two weeks. Figure 11 gives a brief summary of the methodology used to calculate the HuffPost Pollster Trend Line Estimate.
Like Talking Points Memo and FiveThirtyEight, Huffington Post uses regression analysis to estimate the averages for each candidate and thus incorporates data from older polls each time it creates a new estimate (with newer surveys given more weight). However, the HuffPost Pollster model is relatively unsophisticated compared to FiveThirtyEight in terms of weighting and adjusting polls. Additionally, HuffPost Pollster differs from the other three websites in terms of which polls are included.
Unlike Real Clear Politics and Talking Points Memo, HuffPost Pollster included internet polls (as did FiveThirtyEight). Also noteworthy: new to this election cycle, HuffPost Pollster employed a slightly tighter standard for including polls, and thus excluded a number of automated land-line-only surveys and a few other surveys that had been included in previous years. Figure 12 lists the latest polls used by HuffPost Pollster in its regressions to calculate final estimates for each candidate.
Figure 10
HuffPost Pollster Final Projected Florida Trend Line Estimate
Figure 11
HuffPost Pollster Trend Line Estimate Methodology
(From HuffPost Pollster FAQ with a Note from Dr. Jewett)
Figure 12
Latest Polls Used to Calculate HuffPost Pollster Final Florida Trend Estimate
(Earlier Polls Were Also Used, But Were Given Less Weight)
Methodology Affected Presentation, Predictions and Perceptions
Like the actual race between Donald Trump and Hillary Clinton in Florida, the contest among these four websites and their models to accurately forecast the winner was extremely close (see Table 1). However, in the end, only Real Clear Politics correctly predicted that Donald Trump would win the Sunshine State. None of the models was terribly off in terms of statistical precision, with even the “worst” performer coming within three points of the actual margin of victory.
Still, important differences in methodology did shape the presentation of the race over time, the actual predicted results, and the expectations readers carried into Election Day. Someone who only followed Huffington Post over the final five months of the race would almost certainly have thought Hillary Clinton was going to win Florida, since she was ahead in the Pollster average the entire time and Trump never seemed to close the gap. Thus Huffington Post readers were probably surprised, if not shocked, that Donald Trump won Florida, since HuffPost Pollster put the odds of Clinton winning Florida at 96.8% and her chances of becoming the first female president at 98%. The same could be said, to a slightly lesser degree, for readers of Talking Points Memo, where PollTracker portrayed Clinton with a modest but enduring lead for months, with Trump only gaining momentum at the very end and surging to a tie.
Conversely, a regular reader of FiveThirtyEight would most likely have perceived the race as quite close, with Clinton and Trump exchanging the lead several times since June, including the last few days of the election when the state seemed to flip from Trump to Clinton. And a faithful reader of Real Clear Politics would probably not have been stunned by the results in Florida at all, as it was presented as the ultimate battleground, with Clinton and Trump switching leads ten times over five months and Trump only narrowly pulling ahead in the final days.
Table 1
Prediction Accuracy in the 2016 Florida Presidential Race:
A Comparison of Final Results from Four Popular Poll Aggregation Websites
Methodological decisions most likely explain the differences in the portrayal of the race over time and in the final predicted results. The initial decision about which polls to include likely shaped the presentations, predictions, and perceptions of the race. Table 2 shows that for the 2016 election in Florida, published polls using traditional random samples (mostly conducted by land line and cell phone) seemed to be more accurate than internet polls that relied on self-selection. Comparing the simple averages of the final surveys for Florida, random-sample surveys predicted a 0.2% Trump victory, while online surveys using non-random samples had Clinton winning by 1.5%. This might have occurred because younger voters, who are more likely to participate in internet surveys and tend to lean Democratic, did not turn out for Hillary Clinton at the same levels they did for Barack Obama.
It is also possible that the surge in white exurban, rural, and working-class voters who generally supported Trump in 2016 was more accurately captured by traditional phone-based polls. Or it could be that two of the traditional phone polls favoring Trump in the final week were done by Republican-leaning firms whose weighting produced a more Republican sample, one that ended up being more reflective of the actual voter pool that cast ballots. Whatever the exact reason, the two less accurate websites in 2016, Huffington Post and FiveThirtyEight, included internet-based polls with self-selected participants, while the two more accurate sites, Real Clear Politics and Talking Points Memo, largely excluded online polls and relied on surveys that used random samples.
Table 2
Prediction Accuracy in the 2016 Florida Presidential Race:
A Comparison of Final Published Polls Using Random and Non-Random Samples
In addition to determining which polls to include, the other methodological decisions employed by the websites almost certainly account for the rest of the differences in how the race was presented over time and in the final predictions. Real Clear Politics’ simple poll averages, based only on the most recent polls, are likely to reflect changes in the race more quickly and dramatically than Huffington Post’s and Talking Points Memo’s basic regression analyses, which tend to smooth out the ups and downs of poll swings. FiveThirtyEight’s more sophisticated model, which weights and adjusts polls on a variety of factors while also employing regression, explains why it displayed several lead changes over time, though not as many, as sharply, or as quickly as Real Clear Politics.
In terms of the final results, this may also help explain the ability of Real Clear Politics to capture the late breaking swing to Trump (Florida exit polls showed that the 11% of voters who made up their mind in the final week swung heavily to Trump 55%-38%). Because the RCP Poll Average drops older polls and only includes the latest polls from any pollster, late swings may be captured more fully and quickly by their model.
It is possible, but unlikely, that partisanship explains the differing results. Huffington Post is the most popular politically progressive website in the United States, and it predicted Hillary Clinton was ahead for months and would win Florida. Talking Points Memo is also a liberal website, and while it did portray Clinton ahead for most of the race, it also called a tie at the end. Conversely, Real Clear Politics is considered by some to be conservative (its co-founders are conservative, but the site itself generally features a diverse mix of ideological voices), and it was the only site to predict a Trump victory in Florida.
However, it also showed a competitive horserace for months, with Hillary often in the lead almost to the very end. FiveThirtyEight (and its founder and chief statistician Nate Silver) has been accused of political or statistical bias by both the left and the right, but it presented the race as a literal toss-up throughout the cycle, with several lead changes including a switch at the very end that left Clinton a narrow favorite. What’s more, the partisanship explanation falls flat when looking at the results from 2012.
Looking Back, Looking Forward
While Real Clear Politics was the most accurate of the four poll aggregation websites in its final 2016 Florida prediction and Talking Points Memo a close second, there is no guarantee that they will repeat in the next election. In fact, in 2012 when Obama won Florida by slightly less than one percent, Real Clear Politics incorrectly predicted a Romney victory by 1.5% and Talking Points Memo a 1.2% victory for Romney. Alternatively, in 2012, Huffington Post predicted a tie in the Sunshine State and FiveThirtyEight correctly flipped its prediction from Romney to Obama on the final evening before Election Day (just the opposite of what happened in 2016 when their prediction changed on the last day to the incorrect side).
Ultimately the data and analysis suggest two things for fans of Florida politics in the future: first, Florida is the premier battleground state, will likely remain a toss-up for at least several more presidential elections, and often will be very difficult to predict accurately beyond saying “it’s close”; and second, if you are a fan of following the presidential election “horserace” in Florida, you will get the most accurate view of the race (and perhaps avoid some election night shock and trauma) by tracking the contest on a variety of poll aggregation websites rather than relying on just one.