In other cases, the conventional wisdom has flip-flopped without journalists pausing to consider why they got the story wrong in the first place. We even got into a couple of very public screaming matches with people who we thought were unjustly overconfident in Trump's chances. Moreover, we "leaned into" this view in the tone and emphasis of our articles, which often scolded the media for overrating Trump's chances. This average reflects some states (such as Wisconsin) where Trump beat his polls by more than 2.7 points, along with others (such as Nevada) where Clinton beat her polls. And at several key moments they'd also shown a close race. But they won't be easy to correct unless journalists' incentives or the culture of political journalism change. National journalists usually interpreted conflicting and contradictory information as confirming their prior belief that Clinton would win. My view is that we had lots of problems, but that we got most of them out of the way good and early by botching our assessment of Trump's chances of winning the Republican primary. While FiveThirtyEight's final "polls-only" forecast gave Trump a comparatively generous 3-in-10 chance (29 percent) of winning the Electoral College, it was somewhat outside the consensus, with some other forecasts showing Trump with less than a 1 in 100 shot. Not every article from The New York Times's political desk was a misfire. That is, they're highly relevant for forecasting future presidential and midterm elections, but probably not for covering other sorts of news events. Technically speaking, Trump ended the day on July 30 with a 50.1 percent chance of winning in our polls-only forecast. Few major news organizations conveyed more confidence in Clinton's chances or built more of their coverage around the presumption that she'd become the 45th president. But you couldn't really pretend that you'd put Trump's chances at 40 percent instead. One final ground rule: The corpus for this critique will be The New York Times. Never mind, for a moment, that these states wouldn't have been enough to change the overall result. Not accounting for defections from faithless electors.
But for better or worse, what we're saying here isn't just hindsight bias. It's fair to question Clinton's approach, but it's also important to ask whether journalists put too much stock in the Clinton campaign's view of the race. The Times, which hosted FiveThirtyEight from 2010 to 2013, is one of the two most influential outlets for American political news, along with The Washington Post. To be clear, if the polls themselves have gotten too much blame, then misinterpretation and misreporting of the polls is a major part of the story. But we've already covered these modeling issues at length both before and after the election, so I won't dwell on them quite as much here. And I don't expect many of the answers to be obvious or easy. Conservative-leaning sites like the National Review often provided excellent coverage of the campaign. Ground rule No. 2: These articles will mostly critique how conventional horse-race journalism assessed the election, although with several exceptions. Throughout the campaign, the polls had hallmarks of high uncertainty, indicating a volatile election with large numbers of undecided voters. Elsewhere at the Times, Nate Cohn at The Upshot provided a number of excellent analyses. It's a somewhat fuzzy distinction, but important for what lessons might be drawn from them. To some of you, a forecast that showed Trump with about a 30 percent chance of winning when the consensus view was that his chances were around 15 percent6 will self-evidently seem smart. Meanwhile, he beat his polls by only 2 to 3 percentage points in the average swing state.3 Certainly, there were individual pollsters that had some explaining to do, especially in Michigan, Wisconsin and Pennsylvania, where Trump beat his polls by a larger amount. Independent evaluations also judged FiveThirtyEight's forecast to be the most accurate (or perhaps better put, the least inaccurate) of the models. Specifically, Trump beat his FiveThirtyEight adjusted polling average by a net of 2.7 percentage points in the average state, weighted by the state's likelihood of being the tipping-point state.
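As a concrete illustration of what a "tipping-point-weighted" polling miss means, here is a minimal sketch in Python. The state margins and weights below are made-up placeholders rather than FiveThirtyEight's actual adjusted averages or tipping-point probabilities; the point is only to show the form of the calculation.

```python
# Hypothetical illustration of a tipping-point-weighted polling error.
# Margins are Clinton-minus-Trump, in percentage points; weights are each state's
# assumed probability of being the tipping-point state. All numbers are placeholders.
states = [
    # (state, polled margin, actual margin, tipping-point weight)
    ("Pennsylvania", +3.7, -0.7, 0.17),
    ("Florida",      +0.7, -1.2, 0.16),
    ("Michigan",     +4.2, -0.2, 0.09),
    ("Wisconsin",    +5.3, -0.8, 0.06),
    ("Nevada",       +1.2, +2.4, 0.04),
]

# A positive error means Trump beat his polls in that state.
weighted_error = sum((poll - actual) * w for _, poll, actual, w in states)
total_weight = sum(w for *_, w in states)

print(f"Tipping-point-weighted polling miss: {weighted_error / total_weight:+.1f} points toward Trump")
```

Run on the real adjusted averages and tipping-point probabilities, this is the kind of calculation that produces the 2.7-point figure cited above.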
Furthermore, editors and reporters make judgments about the horse race in order to decide which stories to devote resources to and how to frame them for their readers. Go back and read their coverage and it's clear that The Washington Post was prepared for the possibility of a Trump victory in a way that The New York Times wasn't, for instance. (If Clinton had won Michigan and Wisconsin, she'd still have only 258 electoral votes.4 To beat Trump, she'd have also needed a state such as Pennsylvania or Florida where she campaigned extensively.) This is the question I've spent the past two to three months thinking about. This is not an arbitrary choice. If you'd published a model that put Trump's chances at 10 percent, for example, you could defend that as having been a reasonable forecast given the data available to you, or you could say the result had revealed a flaw in the model. If you go back and check our coverage, you'll see that most of these points are things that FiveThirtyEight (and sometimes also other data-friendly news sites) raised throughout the campaign. I want to lay down a few ground rules for how this series of articles will proceed — but first, a few words about FiveThirtyEight's coverage of Trump. One nice thing about statistical forecasts is that they don't leave a lot of room for ambiguity. Specifically, it will be stories published by the Times's political desk (as opposed to by its investigations team, in its editorial pages or by its data-oriented subsite, The Upshot). A story that was at the top of the news for six of the seven days following the Oct. 28 letter clearly had an impact on Clinton's numbers. (Usually, these take the form of authoritatively worded analytical claims about the race, such as declaring which states are in play in the Electoral College.) We're currently planning on about a dozen of these articles — the idea is to be comprehensive — grouped into two broad categories. Obviously, I'm mostly taking a critical focus here, but in the footnotes you can find a list of examples of outstanding horse-race stories — articles that sagely used reporting and analysis to scrutinize the conventional wisdom that Clinton was the inevitable winner.7 Another myth is that Trump's victory represented some sort of catastrophic failure for the polls. Trump outperformed his national polls by only 1 to 2 percentage points in losing the popular vote to Clinton, making them slightly closer to the mark than they were in 2012. That may still largely be true for local reporters, but at the major national news outlets, campaign correspondents rarely stick to just-the-facts reporting ("Hillary Clinton held a rally in Des Moines today"). On Friday at noon, a Category 5 political cyclone that few journalists saw coming will deposit Donald Trump atop the Capitol Building, where he'll be sworn in as the 45th president of the United States. What exactly, then, is the "right" story for how Trump won the election? But the election is too important a story for journalists to just shrug and move on from — or worse, to perpetuate myths that don't reflect the reality of how history unfolded.
And the Times, like the Clinton campaign, largely ignored Michigan and Wisconsin. While data geeks and traditional journalists each made their share of mistakes when assessing Trump's chances during the campaign, their behavior since the election has been different. The criticism is ironic given that many stories during the campaign heralded the Clinton campaign's savviness, while skewering Trump for having campaigned in "solidly blue" states such as Michigan and Wisconsin. But the result was not some sort of massive outlier; on the contrary, the polls were pretty much as accurate as they'd been, on average, since 1968. Clinton lost Wisconsin by about a point when she won the popular vote by 2 points. In the week leading up to Election Day, Clinton was only barely ahead in the states she'd need to secure 270 electoral votes. But for journalists, given the exceptional challenges that Trump poses to the press and the extraordinary moment he represents in American history, it's also imperative to learn from our experiences in covering Trump to date. The most obvious error, given that Clinton won the popular vote by more than 2.8 million votes, is that they frequently mistook Clinton's weakness in the Electoral College for being a strength. Instead of serving as an indication of the challenges of poll interpretation, however, "the models" were often lumped together because they all showed Clinton favored, and they probably reinforced traditional reporters' confidence in Clinton's prospects. So did many of the statistical models of the campaign, of course. The focus on conventional journalism in this article is not meant to imply that data journalists got everything right, however. The technical errors ought to be easier to fix, but they have narrower applications.8 The cognitive biases reflect more deep-seated problems and have more implications for how Trump's presidency will be covered; they're also the root cause of some of the technical errors. That's because we spent a lot of time last spring and summer reflecting on the nomination campaign.
But in the part of the story that I know best, horse-race coverage,1 the results of the learning process have been discouraging so far. Here are just a few examples of excellent horse-race reporting that my colleagues and I learned something from at FiveThirtyEight. They also suggest there are real shortcomings in how American politics are covered, including pervasive groupthink among media elites, an unhealthy obsession with the insider's view of politics, a lack of analytical rigor, a failure to appreciate uncertainty, a sluggishness to self-correct when new evidence contradicts pre-existing beliefs, and a narrow viewpoint that lacks perspective from the longer arc of American history. It's tempting to use the inauguration as an excuse to finally close the chapter on the 2016 election and instead turn the page to the four years ahead. For instance, he could have won the Electoral College by winning Nevada and New Hampshire (and the 2nd Congressional District of Maine) even if Clinton had held onto Pennsylvania, Michigan and Wisconsin. At moments when the polls showed the race tightening, meanwhile, reporters frequently focused on other factors, such as early voting and Democrats' supposedly superior turnout operation, as reasons that Clinton was all but assured of victory. With that in mind, here's ground rule No. 1: These articles will focus on the general election. Election post-mortems by major news organizations have tended to skirt past how much importance they attached to FBI Director James Comey's letter to Congress on Oct. 28, for instance, and how much the polls shifted toward Trump in the immediate aftermath of Comey's letter. Something like the opposite was true in the general election, in our view. But we think the evidence lines up with our version of events. Traditional journalists, as I'll argue in this series of articles, mostly interpreted the polls as indicating extreme confidence in Clinton's chances, however. We'll release these a couple of articles at a time over the course of the next few weeks, adding links as we go along. (Media consolidation may itself be a part of the reason that Trump's chances were underestimated, insofar as it contributed to groupthink about his chances.) On Election Day, Trump's chances were 18 percent according to betting markets and 11 percent based on the average of six forecasting models tracked by The New York Times, so 15 percent seems like a reasonable reflection of the consensus evidence.
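For what it's worth, that 15 percent figure is simply the rough midpoint of the two numbers in the previous sentence. A quick sketch, with the probabilities hard-coded from that sentence, also shows how the gap to FiveThirtyEight's 29 percent forecast mentioned earlier looks in odds terms:

```python
# Consensus estimate of Trump's chances on Election Day, per the figures above.
betting_markets = 0.18     # betting markets
model_average = 0.11       # average of six models tracked by The New York Times
consensus = (betting_markets + model_average) / 2
print(f"consensus: {consensus:.1%}")     # 14.5%, i.e., roughly 15 percent

# FiveThirtyEight's polls-only forecast, restated as odds for comparison.
fivethirtyeight = 0.29
print(f"consensus odds:       {consensus / (1 - consensus):.2f} to 1")   # about 0.17 to 1
print(f"FiveThirtyEight odds: {fivethirtyeight / (1 - fivethirtyeight):.2f} to 1")   # about 0.41 to 1
```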
It puts a fair amount of emphasis on news events such as the Comey letter, which leads to questions about how those stories were covered. Those are radically different forecasts: one model put Trump's chances about 30 times higher than another, even though they were using basically the same data. Meaning: coverage of campaign tactics and the Electoral College, polls and forecasts, demographics and other data, and the causes of Trump's eventual defeat of Hillary Clinton. For instance, it's now become fashionable to bash Clinton for having failed to devote enough resources to Michigan and Wisconsin. Most of these mistakes were replicated by other mainstream news organizations, and also often by empirically minded journalists and model-builders. Clinton led by only 2.3 percentage points in the weighted average of tipping-point states in FiveThirtyEight's final forecast, providing for many potential winning combinations for Trump. The tone and emphasis of our coverage drew attention to the uncertainty in the outcome and to factors such as Clinton's weak position in the Electoral College, since we felt these were misreported and neglected subjects. So here's how we'll proceed. You can find our self-critique of our primary coverage here. I've clipped a number of representative snippets from the Times's coverage of the campaign from the conventions onward. But the answers are potentially a lot more instructive for how to cover Trump's White House and future elections than the ones you'd get by simply blaming the polls for the failure to foresee the outcome. Most of the models didn't account for the additional uncertainty added by the large number of undecided and third-party voters, a factor that allowed Trump to catch up to and surpass Clinton in states such as Michigan. But also, the Times is a good place to look for where coverage went wrong. (At one point, the Times actually referred to Clinton's "administration-in-waiting.") Interestingly enough, the analytical errors made by reporters covering the campaign often mirrored those made by the modelers. It mostly contradicts the way they covered the election while it was underway (when demographics were often assumed to provide Clinton with an Electoral College advantage, for instance).
Not all of these assessments were mea culpas — ours emphatically wasn't (more about that in a moment) — but they at least grappled with the reality of what the models had said.2 At this point, I don't expect to convince anyone about the rightness or wrongness of FiveThirtyEight's general election forecast. I think it's important to single out examples of better and worse coverage, as opposed to presuming that news organizations didn't have any choice in how they portrayed the race, or bashing "the media" at large. Each one will form the basis for a short article that reveals what I view as a significant error in how 2016 was covered. And if almost everyone got the first draft of history wrong in 2016, perhaps there's still time to get the second draft right. It turns out to have some complicated answers, which is why it's taken some time to put this article together (and this is actually the introduction to a long series of articles on this question that we'll publish over the next few weeks). Among our mistakes: That forecast wasn't based on a statistical model, it relied too heavily on a single theory of the nomination campaign ("The Party Decides"), and it didn't adjust quickly enough when the evidence didn't fit our preconceptions about the race. Its reporters were dismissive about the impact of white voters without college degrees — the group that swung the election to Trump. Why, then, had so many people who covered the campaign been so confident of Clinton's chances? He also led in our "now-cast" at various points in time, but the now-cast was intended as a projection of a hypothetical election held that day rather than the Nov. 8 outcome. As a quick review, however, the main reasons that some of the models underestimated Trump's chances are as follows: they didn't account for the added uncertainty from the large number of undecided and third-party voters; they underestimated the extent to which polling errors were correlated from state to state; some were based only on the past few elections; and several were too slow to recognize meaningful shifts in the polls, such as the one after the Comey letter. Put a pin in these points because they'll come up again. They also focused extensively on Clinton's potential gains with Hispanic voters, but less on indications of a decline in African-American turnout. It's much easier to blame the polls for the failure to foresee the outcome, or the Clinton campaign for blowing a sure thing. The first half will cover what I view as technical errors, while the second half will fall under the heading of journalistic errors and cognitive biases.
I obviously have a detailed perspective on this — but in a macroscopic view, the following elements seem essential. This is an uncomfortable story for the mainstream American press. But the overconfidence in Clinton's chances wasn't just because of the polls. Several of the models were too slow to recognize meaningful shifts in the polls, such as the one that occurred after the Comey letter on Oct. 28. I'd also argue that data journalists are increasingly making some of the same non-analytical errors as traditional journalists, such as using social media in a way that tends to suppress reasonable dissenting opinion. Post-election coverage has also sometimes misled readers about how stories were reported upon while the campaign was underway. At the same time, a relatively small group of journalists and news organizations, including the Times, has a disproportionate amount of influence on how political events are understood by large segments of the American public. To others, it will seem foolish. It's going to be a lot of 2016, at the same time we're also covering what's sure to be a tumultuous 2017. Then I'll have some concluding thoughts. Instead, it's increasingly common for articles about the campaign to contain a mix of analysis and reporting and to make plenty of explicit and implicit predictions. Some of the models were based only on the past few elections, ignoring earlier years, such as 1980, when the polling had been way off. But it isn't as though Trump lucked out and just happened to win in exactly the right combination of states. For other detailed reflections, I'd recommend my colleague Clare Malone's piece on what Trump's win in the primary told us about the Republican Party, and my article on how the media covered Trump during the nomination process. Call me a curmudgeon, but I think we journalists ought to spend a few more moments thinking about these things before we endorse the cutely contrarian idea that Trump's presidency might somehow be a good thing for the media. As editor-in-chief of FiveThirtyEight, which takes a different and more data-driven perspective than many news organizations, I don't claim to speak to every question about how to cover Trump. Articles commissioned by the Times's political desk regularly asserted that the Electoral College was a strength for Clinton, when in fact it was a weakness. The table below contains some important examples of this. After Trump's victory, the various academics and journalists who'd built models to estimate the election odds engaged in detailed self-assessments of how their forecasts had performed. Perhaps the biggest myth is when traditional journalists claim they weren't making predictions about the outcome. While our model almost never5 had Trump as an outright favorite, it gave him a much better chance than other statistical models, some of which had him with as little as a 1 percent chance of victory. As you read these, keep in mind this is mostly intended as a critique of 2016 coverage in general, using The New York Times as an example, as opposed to a critique of the Times in particular.
While it's challenging to judge a probabilistic forecast on the basis of a single outcome, we have no doubt that we got the Republican primary "wrong." There's obviously a lot to criticize in how certain statistical models were designed, for instance.
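One of those design choices, how strongly state-level polling errors are assumed to move together, is easy to illustrate with a toy Monte Carlo simulation. Nothing below is any forecaster's actual model: the states, leads, electoral votes and error sizes are simplified placeholders, and the only point is to show that correlated errors give the trailing candidate a much better chance of reaching 270 than independent errors do.

```python
# Toy Monte Carlo: independent vs. correlated state polling errors.
# All inputs are illustrative placeholders, not real forecast parameters.
import random

# (electoral votes, Clinton's polled lead in points) for a few decisive states;
# the rest of the map is assumed to split 200-200 in safe electoral votes.
BATTLEGROUNDS = [(29, 0.5), (20, 3.0), (18, 2.0), (16, 4.0), (15, 2.5), (10, 5.0)]
SAFE_CLINTON = SAFE_TRUMP = 200
NEEDED = 270

def trump_win_prob(shared_sigma, total_sigma=3.5, trials=50_000):
    """Probability Trump reaches 270 when each state's polling error is a shared
    national component (shared_sigma) plus independent state noise, with the total
    per-state error spread held fixed at total_sigma."""
    state_sigma = (total_sigma**2 - shared_sigma**2) ** 0.5
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0.0, shared_sigma) if shared_sigma else 0.0
        trump_ev = SAFE_TRUMP
        for ev, lead in BATTLEGROUNDS:
            error = shared + random.gauss(0.0, state_sigma)
            if error > lead:        # the polling miss wipes out Clinton's lead
                trump_ev += ev
        wins += trump_ev >= NEEDED
    return wins / trials

print("independent state errors:", trump_win_prob(shared_sigma=0.0))
print("correlated state errors: ", trump_win_prob(shared_sigma=3.0))
```

Even in this crude setup, moving the same total error from independent state noise into a shared national component multiplies the underdog's chances several times over, which is why the correlation assumption mattered so much in 2016.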