Logically, whether their views had changed or not, they were a genuine reflection of the thinking of the voting electorate. One media source dismissed this as 'a lucky chance that all the demographics accidentally fell into place' which, given the large sample of voters, is just silly and another attempt to explain away the media and polling failure.
The graph below sets out the Trump (red) versus Clinton (blue) preferences from early September, when people start to take the election seriously. As is obvious, except for a brief period after the release of the ridiculous Bush/Trump "sex tape," Trump led for almost the entire time; i.e., he was always going to win, and all the media beat-ups, distortions, fake polls, outright lies, etc. had no lasting effect on his eventual win.
That USC overstated Trump's popular vote has no bearing on their performance. The IBD and PPD polls also gave Trump a popular-vote lead, with IBD at 1% and PPD at less than 1% being closer to the final result. All three were within the margin of error, and it is their consistent pro-Trump performance over time that is the real indicator of success.
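For context on what "within the margin of error" means: a poll's sampling margin of error depends mainly on the sample size. A minimal sketch of the standard 95% margin-of-error formula for a proportion, using an assumed sample of 1,000 respondents (illustrative only, not any specific poll's figure):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% sampling margin of error for a proportion p with sample size n.
    p = 0.5 is the worst case, giving the widest interval."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents
moe = margin_of_error(1000)
print(f"{moe * 100:.1f} percentage points")  # about 3.1 points
```

So leads of a point or less, like those in the IBD and PPD polls, sit comfortably inside such an interval, which is why direction and consistency over time matter more than any single reading.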
Further, the Democratic supermajority in California is an automatic distortion of nationwide popular-vote totals, especially as the Republican vote there is obviously depressed: there are basically no down-ticket Republicans to vote for at the Senate level, few competitive congressional races, and the presidential race is usually settled before the California polling places close.
In the end, quite simply, the voters wanted change for a multitude of individual and collective reasons and were determined to get change no matter the distractions. The honest USC poll picked that up and was correct, as it had been with the same methodology in 2012.
The "one off" MSM polls proved to be grossly distorted by their overweighting of Democrats, or blatantly incompetent, or even dishonest. It is unlikely they will be taken seriously again.
As James E. Campbell sets out, the aggregate of professional forecasters' predictions from September was almost exactly the final popular vote result. The October-November election coverage was "a tale told by an idiot, full of sound and fury, signifying nothing."
With the dust settling from one of the most brutal and nasty presidential campaigns in modern American history and with the late vote returns creeping up to a final count, it is time to take stock of the presidential election forecasts offered initially to readers of the Crystal Ball website and then published in the October issue of PS: Political Science and Politics. Despite the surprising electoral vote victory of Donald Trump, the vote count as of one week after the election indicates that Democratic nominee Hillary Clinton received 50.5% of the two-party popular vote cast nationwide to Republican President-elect (yes, it is still jolting) Trump’s 49.5%.
So how did the forecasts do? From late June to early September in Sabato’s Crystal Ball, eight forecasters or teams of forecasters issued 10 presidential election forecasts of the national two-party popular vote (along with the PollyVote meta-forecast assembled from an array of different types of forecasts). Aside from a few minor updates, these were the same forecasts later published in PS (in no case did the Crystal Ball and PS reported forecasts differ by more than two-tenths of a percentage point). Table 1 reports the forecasts from the closest to the actual vote division, as it appears at this time, to the forecast with the largest absolute error.
Table 1: Political science forecasts of the 2016 presidential election
Notes: *As of noon on Nov. 16, 2016, the two-party vote for Hillary Clinton was 50.5% (with 130.5 million total votes reported) as calculated from data made available from official sources gathered by David Wasserman. **A preliminary forecast from Lewis-Beck and Tien reported in mid-August was 51.1%. Their final and “official” forecast published in PS and presented at the American Political Science Association meeting is used here.
In an election with plenty of ups and downs in the polls and more than its share of controversies, from the revelation of a salacious old audio tape of Donald Trump to an off-again, on-again FBI probe of Hillary Clinton, and with non-academic daily-changing “forecasts” bouncing around erratically, the political science presidential forecasts generally fared quite well and several were extremely accurate. Five of the 10 forecasts were within one percentage point of the actual vote. These include forecasts by Lockerbie, the Jeromes, Lewis-Beck and Tien as well as the forecasts from my two models. Three of these forecasts missed the actual vote by less than half of a percentage point. Another three of the forecasts (Abramowitz and the two entries by Erikson and Wlezien) were within two points of the vote. Holbrook’s forecast was two points off the vote. Norpoth’s forecast of a Trump popular vote majority had the largest vote percentage error, though it was made in early March, more than 35 weeks before the election, and was still within three points of the actual vote.
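The ranking described above is simply a sort by absolute error against the actual two-party vote share (50.5% for Clinton, per the count cited in the article). A minimal sketch of that logic, using hypothetical forecast values (the numbers below are illustrative, not the published forecasts):

```python
ACTUAL = 50.5  # Clinton's two-party popular vote share, per the article

# Hypothetical forecasts of Clinton's two-party share (illustrative only)
forecasts = {
    "Model A": 50.7,
    "Model B": 51.6,
    "Model C": 48.2,
}

# Absolute error for each forecast, then rank from smallest to largest
errors = {name: abs(pred - ACTUAL) for name, pred in forecasts.items()}
ranking = sorted(errors, key=errors.get)

for name in ranking:
    print(f"{name}: off by {errors[name]:.1f} points")
```

Note that absolute error treats an overestimate and an underestimate of the same size identically, which is why a forecast of a narrow Trump popular-vote win can still rank well against forecasts of a larger Clinton margin.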