US presidential election 2012: Who still needs voters when you have Nate Silver? (Did Voter of the Year Nate Silver help Obama’s reelection?)

Suddenly, Norman felt proud. It all came home to him with great force. He was proud. In this imperfect world, the sovereign citizens of the first and greatest Electronic Democracy had, through Norman Muller (through him), once again exercised their free and inalienable right to vote. Franchise (Isaac Asimov, 1955)
The very act of asking a question can invent a result, because it appeals to the imagination of a respondent who had not yet thought about it. Alain Garrigou
According to the papers, the polls show that most people believe the papers that say that most people believe the polls that show that most people have read the papers that agree that the polls show he is going to win. Mark Steyn
The first computer was delivered to the United States Census Bureau on March 30, 1951 and put into service on June 14. The fifth (built for the Atomic Energy Commission) was used by CBS to predict the outcome of the 1952 presidential election (when the “human” polls had Eisenhower losing). From a sample of one percent of the voters, it predicted that Eisenhower would be elected president, something nobody would have believed, but UNIVAC was right. Wikipedia
UNIVAC I came to the public’s attention in 1952, when CBS used one to predict the outcome of the presidential election. The computer correctly predicted the Eisenhower victory, but CBS did not release that information until after the election because the race was thought to be close. CNN

Warning: one noise can hide another!

But who will talk about the media influence, and hence the properly electoral influence, of our Nate Silvers?

The day after the relatively narrow reelection of the Santa Claus from Chicago, in which, between the apparently unexpected disaffection of part of the Republican and Hispanic electorates and Hurricane Sandy’s “October surprise”, Americans “once again exercised their free and inalienable right to vote”, the progressive planet is congratulating itself on the lesson that the computers of statistics whiz-kid Nate Silver and his NYT blog (like those of Sam Wang or Intrade) have just dealt the GOP’s pollsters and strategists …

How can one not think back (thanks, Dr Goulu) to Isaac Asimov’s 1955 political-fiction story (“Franchise”, published in French as “Le Votant”) about “electronic democracy”, in which the United States of 2008 (the very year of Nate’s first success!) has offloaded the duty of voting onto a giant computer (MULTIVAC) that reduces the entire electoral consultation to the questioning of a single voter, a simple store clerk by trade?

But also to the true story that inspired it, namely the prediction, exactly 60 years ago, by the first commercial computer (UNIVAC I), which Remington Rand had delivered to the US Census Bureau and which, from a one percent sample of the population and against the human pollsters, predicted for CBS the victory of the Republican Eisenhower over the Democrat Stevenson?

Information that CBS, unlike the NYT of 2012, had kept hidden so as not to interfere with an election that had likewise been billed as very close?

5 scientific lessons from Nate Silver’s success

Tom Roud

Café sciences

7 November 2012

The science-geek community found itself a new hero during this American presidential election: Nate Silver, author of the formidable blog 538, who, as I write, has a perfect record predicting the state-by-state results (Florida remains undecided, which he had also predicted).

Five lessons can be drawn from Silver’s success:

this is not the first time Silver has predicted the result of a presidential election state by state. It is in fact the second time, after 2008. In science it is sometimes said that a single spectacular result is worth nothing without confirmation; to my mind, the 2012 election confirms that this is no stroke of luck, and hence that his models can correctly capture reality.

for a model to work, it must draw on many data sources, good and less good. In this case, all the accumulated polls. Silver’s model weights all these polls and, above all, puts every “outlier” in perspective. For example, on October 18 a much-discussed Gallup poll put Romney 7 points ahead of Obama. Silver immediately said it was noise (“polls that look like outliers normally prove to be so”). A reasoned approach identifies the trends where political commentary fixates on the noise.

Inspired by http://xkcd.com/904. Yes, I know, it’s Comic Sans.

a very simple model can nonetheless be surprisingly predictive. Silver’s models rest on the idea that socio-economically similar populations vote the same way. Coupling this idea with demographic data and the available polls, Silver could “project” the results of states even where no polling existed (a toy sketch of this idea follows this list). As someone on my timeline put it this morning, the model fits in an Excel spreadsheet. The simplest models are thus not the least effective, a principle of scientific parsimony often absent from a great deal of modelling (yes, I’m looking at you, “systems biology”).

the corollary is that a complex system can be modelled as long as its “first causes” are correctly identified. No one disputes that the determinants of voting are many and that human nature is complex; yet Silver’s model shows that behaviour can evidently be understood and predicted rather finely. A lesson to remember every time you are told that no one can model a complex, multifactorial system (such as, to pick one at random, the climate).

finally, science means predictions. Silver put himself on the line (going so far as to offer a bet to a columnist who was criticizing his model) and was attacked for it, including in his own newspaper. That is the big difference between a quantitative approach and the rest: you produce predictions, you validate or refute them, and you thereby improve the model over time. A process entirely unknown to many columnists.
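
The sketch promised above: a minimal illustration of the “simple model” lesson, projecting an unpolled state as a similarity-weighted average of demographically comparable polled states. Everything here (features, constants, numbers) is invented for illustration; it captures the spirit of the approach, not Silver’s actual model.

```python
import math

def project_margin(target, polled_states):
    """Project a margin for an unpolled state as a similarity-weighted
    average of polled states. The two features (urban share, median
    income in $k) are illustrative and should be rescaled to comparable
    units in any serious use."""
    weighted, total = 0.0, 0.0
    for features, margin in polled_states:
        w = math.exp(-math.dist(target, features))  # closer -> heavier
        weighted += w * margin
        total += w
    return weighted / total

polled = [((0.8, 55.0), +6.0),   # (urban share, income) -> Dem margin
          ((0.5, 48.0), -3.0)]
print(round(project_margin((0.7, 52.0), polled), 2))  # ~ +3.6
```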

Thanks be, then, to the first psychohistorian!

See also:

US elections 2012

The Signal and the Noise by Nate Silver – review

Nate Silver made headlines predicting Obama’s win. Ruth Scurr learns how he did it

Ruth Scurr

The Guardian

9 November 2012

Obama aside, the indubitable hero of the 2012 US presidential election was the statistician and political forecaster Nate Silver. His blog, FiveThirtyEight.com, syndicated by the New York Times since 2010, correctly predicted the results of the election in 50 out of 50 states. When the worldwide media was universally proclaiming the race too close to call and the pundits were deriding mathematical models, FiveThirtyEight.com steadily argued that the odds made clear that Obama would win. On election day, Silver’s final forecast was that Obama had a 90.9% chance of winning.

The Signal and the Noise: The Art and Science of Prediction

Nate Silver

The Guardian

Reflecting on the electoral impact of Hurricane Sandy, Silver was the voice of sanity in the last few days of the race. On 5 November he suggested that “historical memory” might consider Sandy pivotal, but in fact Obama had been rebounding slowly but surely in the polls since his lows in early October. Listing eight alternative explanations for Obama’s gains after the storm hit – including recent encouraging economic news – Silver concluded that the gains were “over-determined”: a lot of variables might have contributed to the one result.

As the votes were counted and the states declared themselves, vindicating the FiveThirtyEight.com predictions in every single case, Silver’s newly published book became an overnight bestseller.

The first thing to note about The Signal and the Noise is that it is modest – not lacking in confidence or pointlessly self-effacing, but calm and honest about the limits to what the author or anyone else can know about what is going to happen next. Across a wide range of subjects about which people make professional predictions – the housing market, the stock market, elections, baseball, the weather, earthquakes, terrorist attacks – Silver argues for a sharper recognition of “the difference between what we know and what we think we know” and recommends a strategy for closing the gap.

Recognition of the gap is not new: there are plenty of political theorists and scientists droning on about it already, in the manner of the automated voice on the tube when train and platform don’t quite meet. Strategies for closing, or at least narrowing, the gap between what we know and what we think we know in specific contexts are rarer, specialised, and probably pretty hard for anyone outside a small circle of experts to understand.

What Silver has to offer is a lucid explanation of how to think probabilistically. In a promising start, he claims that his model – based on a theorem inspired by Thomas Bayes, the 18th-century English mathematician – has more in common with how soldiers and doctors think than with the cognitive habits of TV pundits. “Much of the most thoughtful work I have found on the use and abuse of statistical models, and on the proper role of prediction, comes from people in the medical profession,” Silver reports. You can quite easily get away with a stupid model if you are a political scientist, but in medicine as in war, “stupid models kill people. It has a sobering effect”.

Silver is not a medical doctor, even if a version of the Hippocratic oath – Primum non nocere (First, do no harm) – is the guiding principle of his probabilistic thinking: “If you can’t make a good prediction, it is very often harmful to pretend that you can.” After graduating from Chicago with a degree in economics in 2000, he worked as a transfer-pricing consultant for the accounting firm KPMG: “The pay was honest and I felt secure,” but he soon became bored. In his spare time, on long flights and in airports, he started compiling spreadsheets of baseball statistics that later became the basis for a predictive system called Pecota.

Silver delivers a candid account of the hits and misses of Pecota, the lessons learned and the system’s limitations: “It’s hard to have an idea that nobody else has thought of. It’s even harder to have a good idea – and when you do, it will soon be duplicated.”

After his interest in baseball peaked, he moved on to predicting electoral politics. The idea for FiveThirtyEight (named after the 538 votes in the electoral college) arrived while Silver was waiting for a delayed flight at New Orleans airport in 2008. Initially, he made predictions about the electoral winners simply by taking an average of the polls after weighting them according to past accuracy. The model gradually became more intricate: his method centres on crunching the data from as many previous examples as possible; imagine a really enormous spreadsheet. He accurately forecast the outcome of 49 out of 50 states in the 2008 presidential election and the winner of all 35 Senate races.

Challenged by the economist Justin Wolfers and his star student David Rothschild as to why he continues to make forecasts through FiveThirtyEight despite fierce competition from larger prediction websites such as Intrade (which covers “everything from who will win the Academy Award for Best Picture to the chance of an Israeli air strike on Iran”), Silver replies: “I find making the forecasts intellectually interesting – and they help to produce traffic for my blog.” His unabashed honesty seems the open secret of his success.

Bayes, who lends his name to Silver’s theorem, was “probably born in 1701 – although it might have been 1702”. Silver is a statistician, not a historian, so he reports the fact of the uncertainty without elaboration. As a Nonconformist, Bayes could not go to Oxford or Cambridge, but was eventually elected a fellow of the Royal Society. His most famous work, “An Essay toward Solving a Problem in the Doctrine of Chances”, was published posthumously in 1763. Silver summarises it as: “a statement – expressed both mathematically and philosophically – about how we learn about the universe: that we learn about it through approximation, getting closer and closer to the truth as we gather more evidence.”

The attraction of Bayes’s theorem, as Silver presents it, is that it concerns conditional probability: the probability that a theory or hypothesis is true if some event has happened. He applies the theorem to 9/11. Prior to the first plane striking the twin towers, the initial estimate of how likely it was that terrorists would crash planes into Manhattan skyscrapers is given as 0.005%. After the first plane hit, the revised probability of a terror attack comes out at 38%. Following the second plane hitting, the revised estimate that it was a deliberate act jumps to 99.99%. “One accident on a bright sunny day in New York was unlikely enough, but a second one was almost a literal impossibility, as we all horribly deduced.”
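
For the curious, those three figures follow from two successive applications of Bayes’s rule. A minimal sketch using the review’s numbers, plus the roughly 0.008% chance of an accidental strike that Silver assumes in the book:

```python
def bayes_update(prior, p_if_true, p_if_false):
    """Posterior probability of the hypothesis after observing the event."""
    num = prior * p_if_true
    return num / (num + (1 - prior) * p_if_false)

p = 0.00005                        # prior: a 0.005% chance of such an attack
p = bayes_update(p, 1.0, 0.00008)  # first plane hits -> ~38%
print(round(p, 2))
p = bayes_update(p, 1.0, 0.00008)  # second plane hits -> ~99.99%
print(round(p, 4))
```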

Fastidiously aware of the gap between what we know and what we think we know, Silver proceeds wryly to delineate the limits of what he has achieved with this application of Bayes’s theorem to 9/11: “It’s not that much of an accomplishment, however, to describe history in statistical terms.”

Silver ends by advocating a balance between curiosity and scepticism when it comes to making predictions: “The more eagerly we commit to scrutinising and testing our theories, the more readily we accept that our knowledge of the world is uncertain, the more willingly we acknowledge that perfect prediction is impossible, the less we will live in fear of our failures, and the more freedom we will have to let our minds flow freely. By knowing more about what we don’t know, we may get a few more predictions right.”

More modesty and effort, in other words, would improve the predictive performance of everyone from the TV pundits to the political scientists, and members of the public trying to understand what is likely to happen next. Just do not expect, Silver warns, to fit a decent prediction on a bumper sticker. “Prediction is difficult for us for the same reason that it is so important: it is where objective and subjective reality intersect.” You would probably need to be a stat geek to drive around with that on the back of your car, but it might just fit if the lettering were small.

• Ruth Scurr’s Fatal Purity: Robespierre and the French Revolution is published by Vintage.

See also:

FiveThirtyEight – Nate Silver’s Political Calculus

Methodology

Our Senate forecasts proceed in seven distinct stages, each of which is described in detail below. For more detail on some of the terms below please see our FiveThirtyEight glossary.

Stage 1. Weighted Polling Average

Polls released into the public domain are collected together and averaged, with the components weighted on three factors:

* Recency. More recent polls receive a higher weight. Older polling is discounted according to an exponential decay formula, with the premium on newness increasing the closer the forecast is made to the election. In addition, when the same polling firm has released multiple polls of a particular race, polls other than its most recent one receive an additional discount. (We do not, however, discard an older poll simply because a firm has come out with a newer one in the same race.)

* Sample size. Polls with larger sample sizes receive higher weights. (Note: no sample size can make up for poor methodology. Our model accounts for diminishing returns as sample size increases, especially for less reliable pollsters.)

* Pollster rating. Lastly, each survey is rated based on the past accuracy of “horse race” polls commissioned by the polling firm in elections from 1998 to the present. The procedure for calculating the pollster ratings is described at length here, and the most recent set of pollster ratings can be found here. All else being equal, polling organizations that, like The New York Times, have staff that belong to the American Association for Public Opinion Research (A.A.P.O.R.), or that have committed to the disclosure and transparency standards advanced by the National Council on Public Polls, receive higher ratings, as we have found that membership in one of these organizations is a positive predictor of the accuracy of a firm’s polling on a going-forward basis.

The procedure for combining these three factors is modestly complex, and is described in more detail here. But, in general, the weight assigned to a poll is designed to be proportional to the predictive power that it should have in anticipating the results of upcoming elections. Note that it is quite common for a particular survey from a mediocre pollster to receive a higher weight than one from a strong pollster, if its poll happens to be significantly more recent or if it uses a significantly larger sample size.
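
As a rough illustration of how such a weight could behave, here is a toy version multiplying an exponential recency decay, a diminishing-returns sample-size factor and a pollster rating. All constants are invented; the actual FiveThirtyEight formula is more elaborate.

```python
import math
from datetime import date

def poll_weight(poll_date, forecast_date, sample_size, pollster_rating,
                decay_rate=0.05, baseline_n=600):
    """Toy poll weight: recency x sample size x pollster rating."""
    age_days = (forecast_date - poll_date).days
    recency = math.exp(-decay_rate * age_days)   # newer polls count more
    size = math.sqrt(sample_size / baseline_n)   # diminishing returns
    return recency * size * pollster_rating

def weighted_polling_average(polls, forecast_date):
    """polls: list of (poll_date, sample_size, rating, dem_margin)."""
    ws = [poll_weight(d, forecast_date, n, r) for d, n, r, _ in polls]
    return sum(w * p[3] for w, p in zip(ws, polls)) / sum(ws)

polls = [(date(2010, 10, 20), 800, 0.9, +3.0),   # recent, strong pollster
         (date(2010, 9, 1), 1500, 0.5, -1.0)]    # older, weaker pollster
print(round(weighted_polling_average(polls, date(2010, 10, 25)), 2))
```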

Certain types of polls are not assigned a weight at all, but are instead dropped from consideration entirely, neither used in FiveThirtyEight’s forecasts nor listed in its polling database. Polls from the firms Strategic Vision and Research 2000, which have been accused – with compelling statistical evidence in each case – of having fabricated some or all of their polling, are excluded. So are interactive (Internet) polls conducted by the firm Zogby, which are associated with by far the worst pollster rating, and which probably should not be considered scientific polls, as their sample consists of volunteers who sign up to take their polls, rather than a randomly-derived sample. (Traditional telephone polls conducted by Zogby are included in the averages, as are Internet polls from firms other than Zogby.)

Polls are also excluded from the Senate model if they are deemed to meet FiveThirtyEight’s definition of being “partisan.” FiveThirtyEight’s definition of a partisan poll is quite narrow, and is limited to polls conducted on behalf of political candidates, campaign committees, political parties, registered PACs, or registered 527 groups. We do not exclude polls simply because the pollster happens to be a Democrat or a Republican, because the pollster has conducted polling for a Democratic or Republican candidate in the past, or because the media organization it is polling for is deemed to be liberal or conservative. The designation is based on who the poll was conducted for, and not who conducted it. Note, however, that there are other protections in place (see Stage 2) if a polling firm produces consistently biased results.

Stage 2. Adjusted Polling Average

After the weighted polling average is calculated, it is subject to three additional types of adjustments.

* The trendline adjustment. An estimate of the overall momentum in the national political environment is determined based on a detailed evaluation of trends within generic congressional ballot polling. (The procedure, which was adopted from our Presidential forecasting model, is described at more length here.) The idea behind the adjustment is that, to the extent that out-of-date polls are used at all in the model (because of a lack of more recent polling, for example), we do not simply assume that they reflect the present state of the race. For example, if the Democrats have lost 5 points on the generic ballot since the last time a state was polled, the model assumes, in the absence of other evidence, that they have lost 5 points in that state as well. In practice, the trendline adjustment is designed to be fairly gentle, and so it has relatively little effect unless there has been an especially sharp change in the national environment or if the polling in a particular state is especially out-of-date.

* The house effects adjustment. Sometimes, polls from a particular polling firm tend consistently to be more favorable toward one or the other political party. Polls from the firm Rasmussen Reports, for example, have shown results that are about 2 points more favorable to the Republican candidate than average during this election cycle. It is not necessarily correct to equate a house effect with “bias” – there have been certain past elections in which pollsters with large house effects proved to be more accurate than pollsters without them – and systematic differences in polling may result from a whole host of methodological factors unrelated to political bias. This nevertheless may be quite useful to account for: Rasmussen showing a Republican with a 1-point lead in a particular state might be equivalent to a Democratic-leaning pollster showing a 4-point lead for the Democrat in the same state. The procedure for calculating the house effects adjustment is described in more detail here. A key aspect of the house effects adjustment is that a firm is not rewarded by the model simply because it happens to produce more polling than others; the adjustment is calibrated based on what the highest-quality polling firms are saying about the race.

* The likely voter adjustment. Throughout the course of an election year, polls may be conducted among a variety of population samples. Some survey all American adults, some survey only registered voters, and others are based on responses from respondents deemed to be “likely voters,” as determined based on past voting behavior or present voting intentions. Sometimes, there are predictable differences between likely voter and registered voter polls. In 2010, for instance, polls of likely voters are about 4 points more favorable to the Republican candidate, on average, than those of registered voters, perhaps reflecting enthusiasm among Republican voters. And surveys conducted among likely voters are about 7 points more favorable to the Republican than those conducted among all adults, whether registered to vote or not.

By the end of the election cycle, the majority of pollsters employ a likely voter model of some kind. Additionally, there is evidence that likely voter polls are more accurate, especially in Congressional elections. Therefore, polls of registered voters (or adults) are adjusted to be equivalent to likely voter polls; the magnitude of the adjustment is based on a regression analysis of the differences between registered voter polls and likely voter polls throughout the polling database, holding other factors like the identity of the pollster constant.
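
A toy version of the house-effects adjustment described above, estimating each firm’s lean against a set of high-quality “anchor” pollsters and subtracting it out. The real calibration is race- and cycle-aware; this only shows the mechanism.

```python
from collections import defaultdict
from statistics import mean

def house_effect_adjust(polls, anchor_firms):
    """polls: list of (firm, dem_margin). Subtract each firm's estimated
    lean relative to the average of the anchor firms."""
    by_firm = defaultdict(list)
    for firm, margin in polls:
        by_firm[firm].append(margin)
    anchor = mean(m for f, m in polls if f in anchor_firms)
    effect = {f: mean(ms) - anchor for f, ms in by_firm.items()}
    return [(f, m - effect[f]) for f, m in polls]

polls = [("Rasmussen", -2.0), ("Rasmussen", -1.0),
         ("NYT/CBS", +1.5), ("Quinnipiac", +0.5)]
print(house_effect_adjust(polls, anchor_firms={"NYT/CBS", "Quinnipiac"}))
```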

Stage 3. FiveThirtyEight Regression

In spite of the several steps that we undertake to improve the reliability of the polling data, sometimes there just isn’t very much good polling in a race, or all of the polling may tend to be biased in one direction or another. (As often as not, when one poll winds up on the wrong side of a race, so do most of the others). In addition, we have found that electoral forecasts can be improved when polling is supplemented by other types of information about the candidates and the contest. Therefore, we augment the polling average by using a linear regression analysis that attempts to predict the candidates’ standing according to several non-poll factors:

A state’s Partisan Voting Index

The composition of party identification in the state’s electorate (as determined through Gallup polling)

The sum of individual contributions received by each candidate as of the last F.E.C. reporting period (this variable is omitted if one or both candidates are new to the race and have yet to complete an F.E.C. filing period)

Incumbency status

For incumbent Senators, an average of recent approval and favorability ratings

A variable representing stature, based on the highest elected office that the candidate has held. It takes on the value of 3 for candidates who have been Senators or Governors in the past; 2 for U.S. Representatives, statewide officeholders like Attorneys General, and mayors of cities of at least 300,000 persons; 1 for state senators, state representatives, and other material elected officeholders (like county commissioners or mayors of small cities); and 0 for candidates who have not held a material elected office before.

Variables are dropped from the analysis if they are not statistically significant at the 90 percent confidence threshold.
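
A sketch of this stage under simple assumptions: ordinary least squares with backward elimination of any regressor that misses the 90 percent significance threshold. The statsmodels library stands in for whatever FiveThirtyEight actually uses, and the predictors and data are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def fit_fundamentals(X, y, names, alpha=0.10):
    """Refit OLS, dropping the least significant regressor until every
    remaining p-value clears the 90 percent confidence threshold."""
    keep = list(range(X.shape[1]))
    while keep:
        model = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        pvals = model.pvalues[1:]            # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:
            return model, [names[i] for i in keep]
        keep.pop(worst)
    raise ValueError("no significant predictors")

# Hypothetical predictors: PVI, party ID, incumbency, fundraising, stature.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = 4.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(size=60)
model, kept = fit_fundamentals(
    X, y, ["pvi", "party_id", "incumbent", "funds", "stature"])
print(kept)   # should retain roughly ["pvi", "incumbent"]
```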

Stage 4. FiveThirtyEight Snapshot

This is the most straightforward step: the adjusted polling average and the regression are combined into a ‘snapshot’ that provides the most comprehensive evaluation of the candidates’ electoral standing at the present time. This is accomplished by treating the regression result as though it were a poll: in fact, it is assigned a poll weight equal to a poll of average quality (typically around 0.60) and re-combined with the other polls of the state.

If there are several good polls in a race, the regression result will be just one of many such “polls”, and will have relatively little impact on the forecast. But in cases where there are just one or two polls, it can be more influential. The regression analysis can also be used to provide a crude forecast of races in which there is no polling at all, although with a high margin of error.
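
The mechanics of that combination are easy to mimic: treat the regression output as one more “poll” with a weight of about 0.60 and recombine. A minimal sketch with made-up margins and weights:

```python
def snapshot(weighted_polls, regression_margin, regression_weight=0.60):
    """weighted_polls: list of (dem_margin, weight). The regression
    result enters as one extra pseudo-poll of average weight."""
    items = weighted_polls + [(regression_margin, regression_weight)]
    total = sum(w for _, w in items)
    return sum(m * w for m, w in items) / total

# Two real polls plus a regression estimate that disagrees with them.
print(round(snapshot([(+2.0, 1.2), (+4.0, 0.8)], regression_margin=-1.0), 2))
```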

Stage 5. Election Day Projection

It is not necessarily the case, however, that the current standing of the candidates – as captured by the snapshot – represents the most accurate forecast of where they will finish on Election Day. (This is one of the areas in which we’ve done a significant amount of work in transitioning FiveThirtyEight’s forecast model to The Times.) For instance, large polling leads have a systematic tendency to diminish in races with a large number of undecided voters, especially early in an election cycle. A lead of 48 percent to 25 percent with a high number of undecided voters, for example, will more often than not decrease as Election Day approaches. Under other circumstances (such as an incumbent who is leading a race in which there are few undecided voters), a candidate’s lead might actually be expected to expand slightly.

Separate equations are used for incumbent and open-seat races, the formula for the former being somewhat more aggressive. There are certain circumstances in which an incumbent might actually be a slight underdog to retain a seat despite having a narrow polling lead – for instance, if there are a large number of undecided voters – although this tendency can sometimes be overstated.

Implicit in this process is the distribution of the undecided vote; thus, the combined result for the Democratic and the Republican candidate will usually reflect close to 100 percent of the vote, although a small reservoir is reserved for independent candidates in races where they are on the ballot. In races featuring three or more viable candidates (that is, three candidates with a tangible chance of winning the election), however, such as the Florida Senate election in 2010, there is little empirical basis on which to make a “creative” vote allocation, and so the undecided voters are simply divided evenly among the three candidates.

Stage 6. Error Analysis

Just as important as estimating the most likely finish of the two candidates is determining the degree of uncertainty intrinsic to the forecast.

For a variety of reasons, the magnitude of error associated with election outcomes is higher than what pollsters usually report. For instance, in polls of Senate elections since 1998 conducted in the final three weeks of the campaign, the average error in predicting the margin between the two candidates has been about 5 points, which would translate into a roughly 6-point margin of error. This may be twice as high as the 3- or 4-percent margins of error that pollsters typically report, which reflect only sample variance, but not other ambiguities inherent to polling. Combining polls together may diminish this margin of error, but their errors are sometimes correlated, and they are nevertheless not as accurate as their margins of error would imply.

Instead of relying on any sort of theoretical calculation of the margin of error, therefore, we instead model it directly based on the past performance of our forecasting model in Senatorial elections since 1998. Our analysis has found that certain factors are predictably associated with a greater degree of uncertainty. For instance:

The error is higher in races with fewer polls.

The error is higher in races where the polls disagree with one another.

The error is higher when there are a larger number of undecided voters.

The error is higher when the margin between the two candidates is lopsided.

The error is higher the further one is from Election Day.

Depending on the mixture of these circumstances, a lead that is quite safe under certain conditions may be quite vulnerable in others. Our goal is simply to model the error explicitly, rather than to take a one-size-fits-all approach.
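
One way to “model the error explicitly” in the spirit of the factors just listed is a simple additive model. Every coefficient below is invented purely for illustration:

```python
def forecast_stderr(n_polls, poll_spread, undecided_pct, margin, days_out):
    """Toy uncertainty model: a base error inflated by each risk factor."""
    return (3.0
            + 2.0 / max(n_polls, 1)   # fewer polls -> more error
            + 0.5 * poll_spread       # disagreeing polls -> more error
            + 0.05 * undecided_pct    # many undecideds -> more error
            + 0.03 * abs(margin)      # lopsided margin -> more error
            + 0.02 * days_out)        # far from Election Day -> more error

print(round(forecast_stderr(2, 4.0, 15, 20, 30), 2))   # data-poor race: ~7.95
print(round(forecast_stderr(12, 1.0, 5, 3, 3), 2))     # well-polled race: ~4.07
```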

Stage 7. Simulation

Knowing the mean forecast for the margin between the two candidates, and the standard error associated with it, suffices mathematically to provide a probabilistic assessment of the outcome of any one given race. For instance, a candidate with a 7-point lead, in a race where the standard error on the forecast estimate is 5 points, will win her race 92 percent of the time.
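
That 92 percent is the normal distribution at work: the probability that a margin centered on a 7-point lead with a 5-point standard error stays above zero.

```python
from statistics import NormalDist

def win_probability(lead, stderr):
    """P(final margin > 0) when the margin is ~ Normal(lead, stderr)."""
    return 1 - NormalDist(mu=lead, sigma=stderr).cdf(0)

print(round(win_probability(7, 5), 3))   # ~0.919, the ~92% cited above
```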

However, this is not the only piece of information that we are interested in. Instead, we might want to know how the results of particular Senate contests are related to one another, in order to determine for example the likelihood of a party gaining a majority, or a supermajority.

Therefore, the error associated with a forecast is decomposed into local and national components by means of a sum-of-squares formula. For Congressional elections, the ‘national’ component of the error is derived from a historical analysis of generic ballot polls: how accurately the generic ballot forecasts election outcomes, and how much the generic ballot changes between Election Day and the period before Election Day. The local component of the error is then assumed to be the residual of the national error from the sum-of-squares formula, i.e.:

error_local = sqrt(error_total² − error_national²)

The local and national components of the error calculation are then randomly generated (according to a normal distribution) over the course of 100,000 simulation runs. In each simulation run, the degree of national movement is assumed to be the same for all candidates: for instance, all the Republican candidates might receive a 3-point bonus in one simulation, or all the Democrats a 4-point bonus in another. The local error component, meanwhile, is calculated separately for each individual candidate or state. In this way, we avoid the misleading assumption that the results of each election are uncorrelated with one another.

A final step in calculating the error is to randomly assign a small percentage of the vote to minor-party candidates, which is assumed to follow a gamma distribution.
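
Putting Stages 6 and 7 together, the correlated simulation can be sketched as follows. Margins and error sizes are hypothetical, and the gamma-distributed minor-party draw is omitted for brevity; the key point is the shared national swing that correlates all races within a run.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_seats(margins, total_err, national_err, runs=100_000):
    """Each run draws one shared national swing plus an independent local
    error per race; sigma_local^2 = sigma_total^2 - sigma_national^2."""
    margins = np.asarray(margins, dtype=float)
    local_err = np.sqrt(total_err**2 - national_err**2)
    national = rng.normal(0.0, national_err, size=(runs, 1))  # shared swing
    local = rng.normal(0.0, local_err, size=(runs, len(margins)))
    return ((margins + national + local) > 0).sum(axis=1)     # Dem wins/run

# Three hypothetical races with Democratic leads of +1, +4 and -2 points.
seats = simulate_seats([1.0, 4.0, -2.0], total_err=6.0, national_err=3.0)
print((seats >= 2).mean())   # chance Democrats win at least two of three
```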

A separate process is followed where three or more candidates are deemed by FiveThirtyEight to be viable in a particular race, which simulates exchanges of voting preferences between each pairing of candidates. This process is structured such that the margins of error associated with multi-candidate races are assumed to be quite high, as there is evidence that such races are quite volatile.

See further:

50th anniversary of the UNIVAC I

CNN

BLUE BELL, Pennsylvania (CNN) — Fifty years ago — on June 14, 1951 — the U.S. Census Bureau officially put into service what it calls the world’s first commercial computer, known as UNIVAC I.

UNIVAC stands for Universal Automatic Computer. The first model was built by the Eckert-Mauchly Computer Corp., which was purchased by Remington Rand shortly before the UNIVAC went on sale.

Rights to the UNIVAC name are currently held by Unisys.

Unisys spokesmen Guy Isnous and Ron Smith say other early users of UNIVACs included the U.S. Air Force, the U.S. Army, the Atomic Energy Commission, General Electric, Metropolitan Life, US Steel, and DuPont.

The UNIVAC was not the first computer ever built. A host of companies, including Eckert-Mauchly, Remington Rand, IBM, and others, all were developing computers for commercial applications at the same time.

Perhaps the most famous computer of the era was the ENIAC, a computer developed for the U.S. military during World War II. Other computers developed in the 1940s were mostly used by academia.

But the UNIVAC I was the first computer to be widely used for commercial purposes — 46 machines were built, for about $1 million each.

Compared to other computers of the era, the UNIVAC I machines were small — about the size of a one-car garage. Each contained about 5,000 vacuum tubes, all of which had to be easily accessible for replacement because they burned out frequently.

Keeping all those vacuum tubes cool was also a major design challenge. The machines were riddled with pipes that circulated cold water to keep the temperature down.

Each unit was so bulky and needed so much maintenance that some of the companies that bought them never moved them to their own facility, instead leaving them on-site at Remington Rand.

UNIVAC I came to the public’s attention in 1952, when CBS used one to predict the outcome of the presidential election. The computer correctly predicted the Eisenhower victory, but CBS did not release that information until after the election because the race was thought to be close.

Finally, see:

Polling Opinion: More Sorcery Than Science

Ivan Kenneally

November 5, 2012

At first glance, political opinion polls seem like the apex of modern liberal democracy. In their special alchemy they congeal a sensitivity to the will of the people and an emphasis on mathematical exactitude. The poll is the culmination of the peculiar modern marriage of science and popular sovereignty, the technocratic and the democratic. To borrow from Hamilton, and by borrow I mean disfigure, the poll is the ultimate success of our “grand experiment in self-governance.”

Of course, on another interpretation, they are completely useless.

As the estimable Jay Cost points out in the Weekly Standard, the polls this year simply don’t seem to add up, collectively defeated by the strident arithmetic that underwrites their purported value. Depending on what pollster you ask, Romney is poised for an explosive landslide of a victory, or about to win a historically close election, or is about to lose decisively, in a fit of humiliation. If you ask Paul Krugman, and I don’t advise that you should unless you’ve been inoculated against shrill, he will call you stupid for suggesting Romney has any chance at victory.

What all these positions have in common is an appeal to the unassailability of mathematics, that last frontier that resists our postmodern inclinations to promiscuously construct and deconstruct the truth like a pile of lego pieces.

What accounts for the persistent and often wide ranging divergence between polls? The most common answer is that there are fundamental variations in the pool of respondents sampled. For example, polls typically target a particular population: adults at large, registered voters, likely voters, actual voters, and all these categories can be infinitely subdivided and, in labyrinthine ways, overlap. Further muddying already turbid waters, each one of these populations tends to be more or less Republican or Democrat so every poll relies upon some algorithmic method to account for these variations and extrapolate results calibrated in light of them. These methods are themselves borne out of a multiplicity of veiled political assumptions driving the purportedly objective analysis in one direction or another, potentially tincturing the purity of mathematical data with ideological agenda. Math doesn’t lie but those who make decisions about what to count and how to count it surely do.

Another problem is that voter self-identification, a crucial ingredient in any poll, is both fluid and deceptive. Consider that while approximately 35% of all voters classify themselves as “independents”, only 10% of these actually have no party affiliation. In other words, in any given year, voters registered with a certain party might be inspired to vote independently or even switch sides without surrendering their party membership. These episodic fits of quasi-independence can create the illusion that there are grand tectonic shifts in the ideological makeup of the voting public. It’s worth noting that the vast majority of so-called independents pretty reliably vote with their party of registration.

The problem of self-identification is symptomatic of the larger difficulty that polling, for all its mathematical pretensions, depends on the human formulation of questions to be interpreted and then answered by other human beings. Just as the questions posed can be loaded with hidden premises and implicit political judgments, the responses solicited can be more or less honest, clear, and well-considered. It seems methodologically cheap to proudly claim scientific exactitude after counting the yeas and nays generated by the hidden complexity of these exchanges. Measuring what are basically anecdotal reports with numbers doesn’t magically transform a species of hearsay into irrefragable evidence any more than it would my mother’s homespun grapevine of gossip. The ambiguous contours of human language resist the charms of arithmetic.

The ultimate value of any polling is always a matter to be contextually determined, especially in light of our peculiar electoral college, which isolates the impact of a voting population within its state. So the oft-cited fact that 35% of voters consider themselves independent might seem like a count of great magnitude, but most of those reside in states, like California and New York, whose electoral college votes are a foregone conclusion. When true independent voters in actual swing states are specifically considered, then only 3-5% of the voting population is, in any meaningful sense, genuinely undecided. Despite their incessant production, it is far from clear how informative we can consider polls that generally track the popular vote since, in and of itself, the popular vote decides nothing.

So the mathematical scaffolding of polls presumes non-mathematical foundations, stated and unstated assumptions, partisan inclinations and non-partisan miscalculations. When the vertiginous maelstrom of numbers fails in its most fundamental task, aligning disorder with order, bringing sense to a wilderness of senselessness, then where can we turn for guidance? I can’t just wait for the results Tuesday night – the modern in my marrow craves not just certainty but prediction, absolute knowledge as prologue. There’s no technocratic frisson in finding anything out after the fact, without the prescience of science, which appeals just as much to our desire to be clever as it does to our craving for knowledge.

I will suggest what no political scientist in America is suggesting: set aside the numbing numbers and the conflicting claims to polling precision and follow me in following Aristotle. We must survey what is available to us in ordinary experience, what we can confirm as a matter of pre-scientific perception, the ancient realism that appealed not to computational models but to the evidence I can see with my own eyes.

What do I see with these eyes? A president running as a challenger, pretending he wasn’t in charge the last four years of blight and disappointment. I see a less than commanding Commander in Chief trying to slither past a gathering scandal that calls into suspicion his character and competence to protect his country. I see a wheezing economy, so infirm our president celebrated a palsied jobs report as evidence of our march to prosperity. I see transparent class warfare that insidiously assumes our embattled middle class resents the rich more than they resent their own shrinking economic opportunity and that women feel flattered and emboldened when condescendingly drawn into a magically conjured cultural war.

I see enthusiastic crowds form around the man they think will deliver them from four years of gruesome ineffectiveness and a defeated left, dispirited and weary, unlikely to convert but even less likely to surge. I see ads about Big Bird and a terror of confronting big issues and a president who seems as bored by his performance as we are. Obama does not look like a winner, not to these eyes.

So in an election year hyper-charged with ideological heat, and polling data potentially varnished by self-fulfilling prophecy and partisan wishful thinking, I tend to rely upon an old school conception of realism: what I can see and what I can modestly infer from what I see. Today, as I write this, I see a Romney victory, however narrowly achieved. This would also be a big victory for the common sense of ordinary political perception over the tortured numbers games that aim to capture it precisely, or to mold it presumptuously.


3 comments on US presidential election 2012: Who still needs voters when you have Nate Silver? (Did Voter of the Year Nate Silver help Obama’s reelection?)

  1. […] Jacques Parizeau’s 1995 declaration: “It’s true, it’s true that we were beaten, but by what, in the end? By money and ethnic votes, essentially. So that means that, next time, instead of 60 or 61% voting Yes, we will be 63 or 64%, and that will be enough.” — Jacques Parizeau (speech of October 30, 1995) As with Quebec, the Republicans lost their referendum “to money and … […]


  2. jcdurbant says:

    FROM TWO TO TWENTY-FIVE PERCENT (Donald Trump has a 20 percent chance of becoming president, Nate Silver)

    I recently estimated Trump’s chance of becoming the GOP nominee at 2 percent.

    Nate Silver (Aug. 6, 2015)

    http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/

    Lately, pundits and punters seem bullish on Donald Trump, whose chances of winning the Republican presidential nomination recently inched above 20 percent for the first time at the betting market Betfair. Perhaps the conventional wisdom assumes that the aftermath of the terrorist attacks in Paris will play into Trump’s hands, or that Republicans really might be in disarray. If so, I can see where the case for Trump is coming from, although I’d still say a 20 percent chance is substantially too high.

    Quite often, however, the Trump’s-really-got-a-chance! case is rooted almost entirely in polls. If nothing Trump has said so far has harmed his standing with Republicans, the argument goes, why should we expect him to fade later on?

    One problem with this is that it’s not enough for Trump to merely avoid fading. Right now, he has 25 to 30 percent of the vote in polls among the roughly 25 percent of Americans who identify as Republican. (That’s something like 6 to 8 percent of the electorate overall, or about the same share of people who think the Apollo moon landings were faked.) As the rest of the field consolidates around him, Trump will need to gain additional support to win the nomination. That might not be easy, since some Trump actions that appeal to a faction of the Republican electorate may alienate the rest of it. Trump’s favorability ratings are middling among Republicans (and awful among the broader electorate).

    Trump will also have to get that 25 or 30 percent to go to the polls. For now, most surveys cover Republican-leaning adults or registered voters, rather than likely voters. Combine that with the poor response rates to polls and the fact that an increasing number of polls use nontraditional sampling methods, and it’s not clear how much overlap there is between the people included in these surveys and the relatively small share of Republicans who will turn up to vote in primaries and caucuses.

    But there’s another, more fundamental problem. That 25 or 30 percent of the vote isn’t really Donald Trump’s for the keeping. In fact, it doesn’t belong to any candidate. If past nomination races are any guide, the vast majority of eventual Republican voters haven’t made up their minds yet.

    It can be easy to forget it if you cover politics for a living, but most people aren’t paying all that much attention to the campaign right now. Certainly, voters are consuming some campaign-related news. Debate ratings are way up, and Google searches for topics related to the primaries have been running slightly ahead of where they were at a comparable point of the 2008 campaign, the last time both parties had open races. But most voters have a lot of competing priorities. Developments that can dominate a political news cycle, like Trump’s frenzied 90-minute speech in Iowa earlier this month, may reach only 20 percent or so of Americans …

    https://fivethirtyeight.com/features/dear-media-stop-freaking-out-about-donald-trumps-polls/

    Trump is at only 36 percent in our national polling average, while Clinton is at only 43 percent. Gary Johnson, the Libertarian Party candidate whom our model explicitly includes in the forecast, is polling in the double digits in some polls, while we’re seeing a significant undecided vote and some votes for other candidates, such as Jill Stein of the Green Party. Historically, high numbers of undecided voters contribute to uncertainty and volatility. So do third-party candidates, whose numbers sometimes fade down the stretch run. With Clinton at only 43 percent nationally, Trump doesn’t need to take away any of her voters to win. He just needs to consolidate most of the voters who haven’t committed to a candidate yet …

    http://fivethirtyeight.com/features/donald-trump-has-a-20-percent-chance-of-becoming-president

    Trump is one of the most astonishing stories in American political history. If you really expected the Republican front-runner to be bragging about the size of his anatomy in a debate, or to be spending his first week as the presumptive nominee feuding with the Republican speaker of the House and embroiled in a controversy over a tweet about a taco salad, then more power to you. Since relatively few people predicted Trump’s rise, however, I want to think through his nomination while trying to avoid the seduction of hindsight bias. What should we have known about Trump and when should we have known it?

    It’s tempting to make a defense along the following lines:

    Almost nobody expected Trump’s nomination, and there were good reasons to think it was unlikely. Sometimes unlikely events occur, but data journalists shouldn’t be blamed every time an upset happens, particularly if they have a track record of getting most things right and doing a good job of quantifying uncertainty.

    We could emphasize that track record; the methods of data journalism have been highly successful at forecasting elections. That includes quite a bit of success this year. The FiveThirtyEight “polls-only” model has correctly predicted the winner in 52 of 57 (91 percent) primaries and caucuses so far in 2016, and our related “polls-plus” model has gone 51-for-57 (89 percent). Furthermore, the forecasts have been well-calibrated, meaning that upsets have occurred about as often as they’re supposed to but not more often.

    But I don’t think this defense is complete — at least if we’re talking about FiveThirtyEight’s Trump forecasts. We didn’t just get unlucky: We made a big mistake, along with a couple of marginal ones.

    The big mistake is a curious one for a website that focuses on statistics. Unlike virtually every other forecast we publish at FiveThirtyEight — including the primary and caucus projections I just mentioned — our early estimates of Trump’s chances weren’t based on a statistical model. Instead, they were what we called “subjective odds” — which is to say, educated guesses. In other words, we were basically acting like pundits, but attaching numbers to our estimates. And we succumbed to some of the same biases that pundits often suffer, such as not changing our minds quickly enough in the face of new evidence. Without a model as a fortification, we found ourselves rambling around the countryside like all the other pundit-barbarians, randomly setting fire to things.

    There’s a lot more to the story, so I’m going to proceed in five sections:

    1. Our early forecasts of Trump’s nomination chances weren’t based on a statistical model, which may have been most of the problem.

    2. Trump’s nomination is just one event, and that makes it hard to judge the accuracy of a probabilistic forecast.

    3. The historical evidence clearly suggested that Trump was an underdog, but the sample size probably wasn’t large enough to assign him quite so low a probability of winning.

    4. Trump’s nomination is potentially a point in favor of “polls-only” as opposed to “fundamentals” models.

    5. There’s a danger in hindsight bias, and in overcorrecting after an unexpected event such as Trump’s nomination …

    https://fivethirtyeight.com/features/how-i-acted-like-a-pundit-and-screwed-up-on-donald-trump

