2012 US presidential election: But who still needs voters when you have Nate Silver? (Did Voter of the Year Nate Silver help Obama’s reelection?)

Suddenly, Norman felt proud. It all came home to him, with force. He was proud. In this imperfect world, the sovereign citizens of the first and greatest Electronic Democracy had, through Norman Muller (through him), once again exercised their free and inalienable right to vote. Franchise (Isaac Asimov, 1955)
The mere act of asking a question can invent a result, because it appeals to the imagination of a respondent who had not yet thought about it. Alain Garrigou
According to the newspapers, the polls show that most people believe the newspapers that declare that most people believe the polls that show that most people have read the newspapers that agree that the polls show he is going to win. Mark Steyn
The first computer was delivered to the United States Census Bureau on March 30, 1951 and put into service on June 14. The fifth (built for the Atomic Energy Commission) was used by CBS to predict the outcome of the 1952 presidential election (even though the "human" polls had Eisenhower losing). From a sample of one percent of the voters it predicted that Eisenhower would be elected president, something nobody would have believed, but UNIVAC had called it right. Wikipedia
UNIVAC I came to the public’s attention in 1952, when CBS used one to predict the outcome of the presidential election. The computer correctly predicted the Eisenhower victory, but CBS did not release that information until after the election because the race was thought to be close. CNN
What accounts for the persistent and often wide ranging divergence between polls? The most common answer is that there are fundamental variations in the pool of respondents sampled. For example, polls typically target a particular population: adults at large, registered voters, likely voters, actual voters, and all these categories can be infinitely subdivided and, in labyrinthine ways, overlap. Further muddying already turbid waters, each one of these populations tends to be more or less Republican or Democrat so every poll relies upon some algorithmic method to account for these variations and extrapolate results calibrated in light of them. These methods are themselves borne out of a multiplicity of veiled political assumptions driving the purportedly objective analysis in one direction or another, potentially tincturing the purity of mathematical data with ideological agenda. Math doesn’t lie but those who make decisions about what to count and how to count it surely do.

Another problem is that voter self-identification, a crucial ingredient in any poll, is both fluid and deceptive. Consider that while approximately 35% of all voters classify themselves as “independents”, only 10% of these actually have no party affiliation. In other words, in any given year, voters registered with a certain party might be inspired to vote independently or even switch sides without surrendering their party membership. These episodic fits of quasi-independence can create the illusion that there are grand tectonic shifts in the ideological makeup of the voting public. It’s worth noting that the vast majority of so-called independents pretty reliably vote with their party of registration.

The problem of self-identification is symptomatic of the larger difficulty that polling, for all its mathematical pretensions, depends on the human formulation of questions to be interpreted and then answered by other human beings. Just as the questions posed can be loaded with hidden premises and implicit political judgments, the responses solicited can be more or less honest, clear, and well-considered. It seems methodologically cheap to proudly claim scientific exactitude after counting the yeas and nays generated by the hidden complexity of these exchanges. Measuring what are basically anecdotal reports with numbers doesn’t magically transform a species of hearsay into irrefragable evidence any more than it would my mother’s homespun grapevine of gossip. The ambiguous contours of human language resist the charms of arithmetic.

The ultimate value of any polling is always a matter to be contextually determined, especially in light of our peculiar electoral college which isolates the impact of a voting population within its state. So the oft cited fact that 35% of voters consider themselves independent might seem like a count of great magnitude but most of those reside in states, like California and New York, whose distribution of electoral college votes is a foregone conclusion. When true independent voters in actual swing states are specifically considered, then only 3-5% of the voting population is, in any meaningful sense, genuinely undecided. Despite their incessant production, it is far from clear how informative we can consider polls that generally track the popular vote since, in and of itself, the popular vote decides nothing. Ivan Kenneally

Warning: one noise can hide another!

But who will speak of the media influence, and hence the properly electoral influence, of our Nate Silvers?

While, the day after the relatively narrow reelection of the Santa Claus of Chicago (in which, between the apparently unexpected disaffection of part of the Republican and Hispanic electorates, and not counting Hurricane Sandy’s "October surprise", Americans "once again exercised their free and inalienable right to vote"), the progressive planet congratulates itself on the lesson that the computers of statistics whiz kid Nate Silver and his NYT blog (like those of Sam Wang or Intrade, for that matter) have just dealt to the GOP’s pollsters and strategists …

How can one not think back (thanks, Dr Goulu) to Isaac Asimov’s 1955 piece of political science fiction ("Franchise", the "right to vote", rendered as "Le Votant" in French) about an "electronic democracy" in which the United States of 2008 (the very year of Nate’s first success!) has offloaded the duty of voting onto a giant computer (MULTIVAC), reducing the entire electoral consultation to the questioning of a single voter, a simple store clerk by trade?

But also to the real story that inspired it, namely the prediction, exactly 60 years ago, by the first commercial computer (UNIVAC I), which the Remington Rand company had delivered to the US Census Bureau and which, from a sample of one percent of the population and against the human polls, predicted for CBS the victory of the Republican Eisenhower over the Democrat Stevenson?

Information that CBS, unlike the NYT of 2012, had in fact kept hidden so as not to interfere in an election that had likewise been billed as very close?

5 scientific lessons from Nate Silver’s success

Tom Roud

Café sciences

November 7, 2012

The science-geek community has found itself a new hero in this American presidential election: Nate Silver, author of the formidable blog 538, who, as I write, has a perfect record in predicting the results state by state (Florida remaining undetermined, which he had, incidentally, also predicted).

Five lessons can be drawn from Silver’s success:

this is not the first time Silver has managed to predict the result of a presidential election state by state. It is in fact the second time, after 2008. In science we sometimes say that a single spectacular result is worth nothing without its confirmation; to my mind, the 2012 election confirms that this was no stroke of luck, and therefore that his models are able to capture a reality correctly.

for a model to work, it has to be based on multiple data sources, good and less good. In this case, all the accumulated polls. Silver’s model weights all these polls properly and, above all, puts all the "outliers" into perspective. For example, on October 18 a much-commented-on Gallup poll put Romney 7 points ahead of Obama. Silver immediately said it was noise ("polls that look like outliers normally prove to be so"). A reasoned approach identifies the trends, where political commentary fixates on the noise.

Inspired by http://xkcd.com/904. Yes, I know, it’s Comic Sans.

a hyper-simple model can nonetheless be surprisingly predictive. Silver’s models rest on the idea that socio-economically similar populations vote in the same way. By coupling this idea with demographic data and the available polls, Silver was able to "project" the results of states even in the absence of any polling there (a minimal sketch of the idea appears after this list). As someone on my timeline put it this morning, the model fits in an Excel spreadsheet. The simplest models are therefore not the least effective, a principle of scientific parsimony often absent from a great deal of modelling (yes, I’m looking at you, "systems biology").

the corollary is that a complex system can be modelled as long as its "first causes" are correctly identified. No one can dispute that the determinants of a vote are many and that human nature is complex; yet Silver’s model shows that it is evidently possible to understand and predict behaviour fairly finely. A lesson to remember every time someone tells you that nobody can model a complex, multifactorial system (such as, to pick one at random, the climate).

finally, science means predictions. Silver put himself on the line (going so far as to bet with a columnist who criticized his model) and was criticized for it, including in his own newspaper. That is the big difference between a quantitative approach and the rest: you produce predictions, you validate or refute them, and in doing so you improve the model over time. A process totally unknown to many columnists.
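To make the third lesson concrete, here is a minimal sketch of the general idea that demographically similar places vote similarly, so an unpolled state can be projected from polled ones weighted by similarity. It illustrates the principle only, not Nate Silver’s actual model; the states, feature vectors and poll margins below are invented.

```python
# Minimal illustration of lesson 3: project an unpolled state from polled states,
# weighting each by demographic similarity. This shows the general principle only,
# not Silver's actual model; every number below is invented.

import math

demographics = {           # toy feature vectors: (income index, % urban, % college)
    "StateA": (0.9, 0.8, 0.4),
    "StateB": (0.8, 0.7, 0.35),
    "StateC": (0.4, 0.3, 0.2),
    "Unpolled": (0.85, 0.75, 0.38),
}
poll_margins = {"StateA": +6.0, "StateB": +3.0, "StateC": -8.0}   # Dem margin, invented

def similarity(x, y):
    # closer demographics -> larger weight
    return 1.0 / (1.0 + math.dist(x, y))

target = demographics["Unpolled"]
weights = {s: similarity(demographics[s], target) for s in poll_margins}
projection = sum(poll_margins[s] * w for s, w in weights.items()) / sum(weights.values())

print(f"Projected margin in the unpolled state: {projection:+.1f}")
```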

So let thanks be given to the first psychohistorian!

See also:

US elections 2012

The Signal and the Noise by Nate Silver – review

Nate Silver made headlines predicting Obama’s win. Ruth Scurr learns how he did it

Ruth Scurr

The Guardian

9 November 2012

Obama aside, the indubitable hero of the 2012 US presidential election was the statistician and political forecaster Nate Silver. His blog, FiveThirtyEight.com, syndicated by the New York Times since 2010, correctly predicted the results of the election in 50 out of 50 states. When the worldwide media was universally proclaiming the race too close to call and the pundits were deriding mathematical models, FiveThirtyEight.com steadily argued that the odds made clear that Obama would win. On election day, Silver’s final forecast was that Obama had a 90.9% chance of winning.

The Signal and the Noise: The Art and Science of Prediction

Nate Silver

the Guardian

Reflecting on the electoral impact of Hurricane Sandy, Silver was the voice of sanity in the last few days of the race. On 5 November he suggested that « historical memory » might consider Sandy pivotal, but in fact Obama had been rebounding slowly but surely in the polls since his lows in early October. Listing eight alternative explanations for Obama’s gains after the storm hit – including recent encouraging economic news – Silver concluded that the gains were « over-determined »: a lot of variables might have contributed to the one result.

As the votes were counted and the states declared themselves, vindicating the FiveThirtyEight.com predictions in every single case, Silver’s newly published book became an overnight bestseller.

The first thing to note about The Signal and the Noise is that it is modest – not lacking in confidence or pointlessly self-effacing, but calm and honest about the limits to what the author or anyone else can know about what is going to happen next. Across a wide range of subjects about which people make professional predictions – the housing market, the stock market, elections, baseball, the weather, earthquakes, terrorist attacks – Silver argues for a sharper recognition of « the difference between what we know and what we think we know » and recommends a strategy for closing the gap.

Recognition of the gap is not new: there are plenty of political theorists and scientists droning on about it already, in the manner of the automated voice on the tube when train and platform don’t quite meet. Strategies for closing, or at least narrowing, the gap between what we know and what we think we know in specific contexts, are rarer, specialised, and probably pretty hard for anyone outside a small circle of experts to understand.

What Silver has to offer is a lucid explanation of how to think probabilistically. In a promising start, he claims that his model – based on a theorem inspired by Thomas Bayes, the 18th-century English mathematician – has more in common with how soldiers and doctors think than with the cognitive habits of TV pundits. « Much of the most thoughtful work I have found on the use and abuse of statistical models, and on the proper role of prediction, comes from people in the medical profession, » Silver reports. You can quite easily get away with a stupid model if you are a political scientist, but in medicine as in war, « stupid models kill people. It has a sobering effect ».

Silver is not a medical doctor, even if a version of the Hippocratic oath – Primum non nocere (First, do no harm) – is the guiding principle of his probabilistic thinking: « If you can’t make a good prediction, it is very often harmful to pretend that you can. » After graduating from Chicago with a degree in economics in 2000, he worked as a transfer-pricing consultant for the accounting firm KPMG: « The pay was honest and I felt secure, » but he soon became bored. In his spare time, on long flights and in airports, he started compiling spreadsheets of baseball statistics that later became the basis for a predictive system called Pecota.

Silver delivers a candid account of the hits and misses of Pecota, the lessons learned and the system’s limitations: « It’s hard to have an idea that nobody else has thought of. It’s even harder to have a good idea – and when you do, it will soon be duplicated. »

After his interest in baseball peaked, he moved on to predicting electoral politics. The idea for FiveThirtyEight (named after the 538 votes in the electoral college) arrived while Silver was waiting for a delayed flight at New Orleans airport in 2008. Initially, he made predictions about the electoral winners simply by taking an average of the polls after weighting them according to past accuracy. The model gradually became more intricate: his method centres on crunching the data from as many previous examples as possible; imagine a really enormous spreadsheet. He accurately forecast the outcome of 49 out of 50 states in the 2008 presidential election and the winner of all 35 senate races.

Challenged by the economist Justin Wolfers and his star student David Rothschild as to why he continues to make forecasts through FiveThirtyEight despite fierce competition from larger prediction websites such as Intrade (which covers « everything from who will win the Academy Award for Best Picture to the chance of an Israeli air strike on Iran »), Silver replies: « I find making the forecasts intellectually interesting – and they help to produce traffic for my blog. » His unabashed honesty seems the open secret of his success.

Bayes, who lends his name to Silver’s theorem, was « probably born in 1701 – although it might have been 1702 ». Silver is a statistician, not a historian, so he reports the fact of the uncertainty without elaboration. As a Nonconformist, Bayes could not go to Oxford or Cambridge, but was eventually elected a fellow of the Royal Society. His most famous work, « An Essay toward Solving a Problem in the Doctrine of Chances », was published posthumously in 1763. Silver summarises it as: « a statement – expressed both mathematically and philosophically – about how we learn about the universe: that we learn about it through approximation, getting closer and closer to the truth as we gather more evidence. »

The attraction of Bayes’s theorem, as Silver presents it, is that it concerns conditional probability: the probability that a theory or hypothesis is true if some event has happened. He applies the theorem to 9/11. Prior to the first plane striking the twin towers, the initial estimate of how likely it was that terrorists would crash planes into Manhattan skyscrapers is given as 0.005%. After the first plane hit, the revised probability of a terror attack comes out at 38%. Following the second plane hitting, the revised estimate that it was a deliberate act jumps to 99.99%. « One accident on a bright sunny day in New York was unlikely enough, but a second one was almost a literal impossibility, as we all horribly deduced. »
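For readers who want to check the arithmetic, here is a minimal sketch of the update Scurr describes. The prior (0.005%) and the two posteriors (38% and 99.99%) come from the passage above; the probability of an accidental strike on a Manhattan skyscraper (taken here as 0.008%) is an assumed likelihood supplied for illustration, not a figure quoted in the review.

```python
# Minimal sketch of the Bayesian update described above.
# prior: probability that terrorists would crash planes into Manhattan skyscrapers.
# p_hit_if_attack: probability of a plane strike given an attack is under way (assumed ~1).
# p_hit_if_no_attack: probability of an accidental strike (0.008% is an assumption
#                     made for this illustration, not a figure from the review).

def bayes_update(prior, p_hit_if_attack=1.0, p_hit_if_no_attack=0.00008):
    """Posterior probability of an attack after observing one plane strike."""
    numerator = prior * p_hit_if_attack
    return numerator / (numerator + (1.0 - prior) * p_hit_if_no_attack)

prior = 0.00005                            # 0.005% before the first plane hit
after_first = bayes_update(prior)          # roughly 38%
after_second = bayes_update(after_first)   # roughly 99.99%

print(f"after first plane:  {after_first:.2%}")
print(f"after second plane: {after_second:.2%}")
```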

Fastidiously aware of the gap between what we know and what we think we know, Silver proceeds wryly to delineate the limits of what he has achieved with this application of Bayes’s theorem to 9/11: « It’s not that much of an accomplishment, however, to describe history in statistical terms. »

Silver ends by advocating a balance between curiosity and scepticism when it comes to making predictions: « The more eagerly we commit to scrutinising and testing our theories, the more readily we accept that our knowledge of the world is uncertain, the more willingly we acknowledge that perfect prediction is impossible, the less we will live in fear of our failures, and the more freedom we will have to let our minds flow freely. By knowing more about what we don’t know, we may get a few more predictions right. »

More modesty and effort, in other words, would improve the predictive performance of everyone from the TV pundits to the political scientists, and members of the public trying to understand what is likely to happen next. Just do not expect, Silver warns, to fit a decent prediction on a bumper sticker. « Prediction is difficult for us for the same reason that it is so important: it is where objective and subjective reality intersect. » You would probably need to be a stat geek to drive around with that on the back of your car, but it might just fit if the lettering were small.

• Ruth Scurr’s Fatal Purity: Robespierre and the French Revolution is published by Vintage.

See as well:

FiveThirtyEight – Nate Silver’s Political Calculus

Methodology

Our Senate forecasts proceed in seven distinct stages, each of which is described in detail below. For more detail on some of the terms below please see our FiveThirtyEight glossary.

Stage 1. Weighted Polling Average

Polls released into the public domain are collected together and averaged, with the components weighted on three factors:

* Recency. More recent polls receive a higher weight. The formula for discounting older polling is based on exponential decay, with the premium on newness increasing the closer the forecast is made to the election. In addition, when the same polling firm has released multiple polls of a particular race, polls other than its most recent one receive an additional discount. (We do not, however, discard an older poll simply because a firm has come out with a newer one in the same race.)

* Sample size. Polls with larger sample sizes receive higher weights. (Note: no sample size can make up for poor methodology. Our model accounts for diminishing returns as sample size increases, especially for less reliable pollsters.)

* Pollster rating. Lastly, each survey is rated based on the past accuracy of “horse race” polls commissioned by the polling firm in elections from 1998 to the present. The procedure for calculating the pollster ratings is described at length here, and the most recent set of pollster ratings can be found here. All else being equal, polling organizations that, like The New York Times, have staff who belong to The American Association for Public Opinion Research (A.A.P.O.R.), or that have committed to the disclosure and transparency standards advanced by the National Council on Public Polls, receive higher ratings, as we have found that membership in one of these organizations is a positive predictor of the accuracy of a firm’s polling on a going-forward basis.

The procedure for combining these three factors is modestly complex, and is described in more detail here. But, in general, the weight assigned to a poll is designed to be proportional to the predictive power that it should have in anticipating the results of upcoming elections. Note that it is quite common for a particular survey from a mediocre pollster to receive a higher weight than one from a strong pollster, if its poll happens to be significantly more recent or if it uses a significantly larger sample size.
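As an illustration only, the combination of the three factors might look something like the sketch below. The half-life, the square-root treatment of sample size and the rating scale are assumptions chosen for the example; they are not FiveThirtyEight’s actual parameters.

```python
import math

# Illustrative three-factor poll weight: recency (exponential decay), sample size
# (diminishing returns) and pollster rating. The 30-day half-life, the square root
# and the n=600 reference size are assumptions made for this sketch.

def poll_weight(days_old, sample_size, pollster_rating, half_life_days=30.0):
    recency = 0.5 ** (days_old / half_life_days)   # newer polls count more
    size = math.sqrt(sample_size / 600.0)          # diminishing returns in sample size
    return recency * size * pollster_rating        # rating near 1.0 for an average firm

# A recent, large poll from a mediocre firm can outweigh an older, smaller one
# from a strong firm, as noted above:
print(poll_weight(days_old=3,  sample_size=1200, pollster_rating=0.8))   # about 1.06
print(poll_weight(days_old=25, sample_size=500,  pollster_rating=1.2))   # about 0.61
```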

Certain types of polls are not assigned a weight at all, but are instead dropped from consideration entirely, and are neither used in FiveThirtyEight’s forecasts nor listed in its polling database. Polls from the firms Strategic Vision and Research 2000, which have been accused – with compelling statistical evidence in each case – of having fabricated some or all of their polling, are excluded. So are interactive (Internet) polls conducted by the firm Zogby, which are associated with by far the worst pollster rating, and which probably should not be considered scientific polls, as their sample consists of volunteers who sign up to take their polls, rather than a randomly-derived sample. (Traditional telephone polls conducted by Zogby are included in the averages, as are Internet polls from firms other than Zogby.)

Polls are also excluded from the Senate model if they are deemed to meet FiveThirtyEight’s definition of being “partisan.” FiveThirtyEight’s definition of a partisan poll is quite narrow, and is limited to polls conducted on behalf of political candidates, campaign committees, political parties, registered PACs, or registered 527 groups. We do not exclude polls simply because the pollster happens to be a Democrat or a Republican, because the pollster has conducted polling for a Democratic or Republican candidate in the past, or because the media organization it is polling for is deemed to be liberal or conservative. The designation is based on who the poll was conducted for, and not who conducted it. Note, however, that there are other protections in place (see Stage 2) if a polling firm produces consistently biased results.

Stage 2. Adjusted Polling Average

After the weighted polling average is calculated, it is subject to three additional types of adjustments.

* The trendline adjustment. An estimate of the overall momentum in the national political environment is determined based on a detailed evaluation of trends within generic congressional ballot polling. (The procedure, which was adopted from our Presidential forecasting model, is described at more length here.) The idea behind the adjustment is that, to the extent that out-of-date polls are used at all in the model (because of a lack of more recent polling, for example), we do not simply assume that they reflect the present state of the race. For example, if the Democrats have lost 5 points on the generic ballot since the last time a state was polled, the model assumes, in the absence of other evidence, that they have lost 5 points in that state as well. In practice, the trendline adjustment is designed to be fairly gentle, and so it has relatively little effect unless there has been an especially sharp change in the national environment or if the polling in a particular state is especially out-of-date.

* The house effects adjustment. Sometimes, polls from a particular polling firm tend consistently to be more favorable toward one or the other political party. Polls from the firm Rasmussen Reports, for example, have shown results that are about 2 points more favorable to the Republican candidate than average during this election cycle. It is not necessarily correct to equate a house effect with “bias” – there have been certain past elections in which pollsters with large house effects proved to be more accurate than pollsters without them – and systematic differences in polling may result from a whole host of methodological factors unrelated to political bias. This nevertheless may be quite useful to account for: Rasmussen showing a Republican with a 1-point lead in a particular state might be equivalent to a Democratic-leaning pollster showing a 4-point lead for the Democrat in the same state. The procedure for calculating the house effects adjustment is described in more detail here. A key aspect of the house effects adjustment is that a firm is not rewarded by the model simply because it happens to produce more polling than others; the adjustment is calibrated based on what the highest-quality polling firms are saying about the race.

* The likely voter adjustment. Throughout the course of an election year, polls may be conducted among a variety of population samples. Some survey all American adults, some survey only registered voters, and others are based on responses from respondents deemed to be “likely voters,” as determined based on past voting behavior or present voting intentions. Sometimes, there are predictable differences between likely voter and registered voter polls. In 2010, for instance, polls of likely voters are about 4 points more favorable to the Republican candidate, on average, than those of registered voters, perhaps reflecting enthusiasm among Republican voters. And surveys conducted among likely voters are about 7 points more favorable to the Republican than those conducted among all adults, whether registered to vote or not.

By the end of the election cycle, the majority of pollsters employ a likely voter model of some kind. Additionally, there is evidence that likely voter polls are more accurate, especially in Congressional elections. Therefore, polls of registered voters (or adults) are adjusted to be equivalent to likely voter polls; the magnitude of the adjustment is based on a regression analysis of the differences between registered voter polls and likely voter polls throughout the polling database, holding other factors like the identity of the pollster constant.
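As an illustration of how the house-effects and likely-voter adjustments might look in code: the house-effect values, the fixed shifts and the margins below are assumptions made for the example; the real model benchmarks house effects against the highest-quality pollsters and estimates the likely-voter shift by regression.

```python
# Illustrative sketch of the house-effects and likely-voter adjustments.
# All numbers are invented for the example.

HOUSE_EFFECT = {"FirmA": +2.0, "FirmB": -1.0}        # average lean vs. a benchmark, in points (assumed)
LV_SHIFT = {"LV": 0.0, "RV": +4.0, "Adults": +7.0}   # added to the Republican margin (2010 averages quoted above)

def adjust_poll(rep_margin, firm, population):
    """Correct a margin for the firm's house effect and put it on a likely-voter basis."""
    return rep_margin - HOUSE_EFFECT.get(firm, 0.0) + LV_SHIFT[population]

print(adjust_poll(+1.0, "FirmA", "LV"))   # 1 - 2 + 0 = -1.0
print(adjust_poll(-4.0, "FirmB", "RV"))   # -4 - (-1) + 4 = +1.0
```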

Stage 3. FiveThirtyEight Regression

In spite of the several steps that we undertake to improve the reliability of the polling data, sometimes there just isn’t very much good polling in a race, or all of the polling may tend to be biased in one direction or another. (As often as not, when one poll winds up on the wrong side of a race, so do most of the others). In addition, we have found that electoral forecasts can be improved when polling is supplemented by other types of information about the candidates and the contest. Therefore, we augment the polling average by using a linear regression analysis that attempts to predict the candidates’ standing according to several non-poll factors:

A state’s Partisan Voting Index

The composition of party identification in the state’s electorate (as determined through Gallup polling)

The sum of individual contributions received by each candidate as of the last F.E.C. reporting period (this variable is omitted if one or both candidates are new to the race and have yet to complete an FEC filing period)

Incumbency status

For incumbent Senators, an average of recent approval and favorability ratings

A variable representing stature, based on the highest elected office that the candidate has held. It takes on the value of 3 for candidates who have been Senators or Governors in the past; 2 for U.S. Representatives, statewide officeholders like Attorneys General, and mayors of cities of at least 300,000 persons; 1 for state senators, state representatives, and other material elected officeholders (like county commissioners or mayors of small cities), and 0 for candidates who have not held a material elected office before.

Variables are dropped from the analysis if they are not statistically significant at the 90 percent confidence threshold.

Stage 4. FiveThirtyEight Snapshot

This is the most straightforward step: the adjusted polling average and the regression are combined into a ‘snapshot’ that provides the most comprehensive evaluation of the candidates’ electoral standing at the present time. This is accomplished by treating the regression result as though it were a poll: in fact, it is assigned a poll weight equal to a poll of average quality (typically around 0.60) and re-combined with the other polls of the state.

If there are several good polls in a race, the regression result will be just one of many such “polls”, and will have relatively little impact on the forecast. But in cases where there are just one or two polls, it can be more influential. The regression analysis can also be used to provide a crude forecast of races in which there is no polling at all, although with a high margin of error.
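In code, treating the regression result as one more "poll" with the average-quality weight of about 0.60 mentioned in the previous step could be as simple as the sketch below; the poll margins and weights are invented.

```python
# Sketch of the "snapshot" step: fold the regression estimate in as one extra poll
# with a weight of 0.60 and take a weighted average. The inputs are invented.

def snapshot(adjusted_polls, regression_margin, regression_weight=0.60):
    entries = adjusted_polls + [(regression_margin, regression_weight)]
    total_weight = sum(w for _, w in entries)
    return sum(margin * w for margin, w in entries) / total_weight

well_polled = [(+2.0, 1.1), (+3.5, 0.9), (+1.0, 0.7)]   # regression barely moves the result
sparsely_polled = [(+4.0, 0.5)]                          # regression matters much more here

print(snapshot(well_polled, regression_margin=-1.0))     # about +1.7
print(snapshot(sparsely_polled, regression_margin=-1.0)) # about +1.3
```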

Stage 5. Election Day Projection

It is not necessarily the case, however, that the current standing of the candidates – as captured by the snapshot – represents the most accurate forecast of where they will finish on Election Day. (This is one of the areas in which we’ve done a significant amount of work in transitioning FiveThirtyEight’s forecast model to The Times.) For instance, large polling leads have a systematic tendency to diminish in races with a large number of undecided voters, especially early in an election cycle. A lead of 48 percent to 25 percent with a high number of undecided voters, for example, will more often than not decrease as Election Day approaches. Under other circumstances (such as an incumbent who is leading a race in which there are few undecided voters), a candidate’s lead might actually be expected to expand slightly.

Separate equations are used for incumbent and open-seat races, the formula for the former being somewhat more aggressive. There are certain circumstances in which an incumbent might actually be a slight underdog to retain a seat despite having a narrow polling lead — for instance, if there are a large number of undecided voters — although this tendency can sometimes be overstated.

Implicit in this process is the distribution of the undecided vote; thus, the combined result for the Democratic and the Republican candidate will usually reflect close to 100 percent of the vote, although a small reservoir is reserved for independent candidates in races where they are on the ballot. In races featuring three or more viable candidates (that is, three candidates with a tangible chance of winning the election), however, such as the Florida Senate election in 2010, there is little empirical basis on which to make a “creative” vote allocation, and so the undecided voters are simply divided evenly among the three candidates.
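The allocation of undecided voters described above might be sketched as follows; the 2 percent reservoir for independents is an assumed figure, and the even split in the two-way case is a simplification of the projection step.

```python
# Sketch of the undecided-vote allocation: in a two-way race, split the undecideds
# between the major-party candidates after holding back a small reservoir for
# independents on the ballot; with three or more viable candidates, divide evenly.
# The 2% reservoir and the even two-way split are simplifying assumptions.

def allocate_undecided(shares, viable=2, independent_reservoir=0.02):
    undecided = 1.0 - sum(shares.values())
    if viable >= 3:
        per_candidate = undecided / len(shares)
        return {name: s + per_candidate for name, s in shares.items()}
    pool = max(undecided - independent_reservoir, 0.0)
    return {name: s + pool / len(shares) for name, s in shares.items()}

print(allocate_undecided({"Dem": 0.48, "Rep": 0.44}))                    # Dem 0.51, Rep 0.47
print(allocate_undecided({"A": 0.35, "B": 0.30, "C": 0.20}, viable=3))   # 0.40, 0.35, 0.25
```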

Stage 6. Error Analysis

Just as important as estimating the most likely finish of the two candidates is determining the degree of uncertainty intrinsic to the forecast.

For a variety of reasons, the magnitude of error associated with election outcomes is higher than what pollsters usually report. For instance, in polls of Senate elections since 1998 conducted in the final three weeks of the campaign, the average error in predicting the margin between the two candidates has been about 5 points, which would translate into a roughly 6-point margin of error. This may be twice as high as the 3- or 4-point margins of error that pollsters typically report, which reflect only sample variance, but not other ambiguities inherent to polling. Combining polls together may diminish this margin of error, but their errors are sometimes correlated, and they are nevertheless not as accurate as their margins of error would imply.

Instead of relying on any sort of theoretical calculation of the margin of error, therefore, we instead model it directly based on the past performance of our forecasting model in Senatorial elections since 1998. Our analysis has found that certain factors are predictably associated with a greater degree of uncertainty. For instance:

The error is higher in races with fewer polls.

The error is higher in races where the polls disagree with one another.

The error is higher when there are a larger number of undecided voters.

The error is higher when the margin between the two candidates is lopsided.

The error is higher the further one is from Election Day.

Depending on the mixture of these circumstances, a lead that is quite safe under certain conditions may be quite vulnerable in others. Our goal is simply to model the error explicitly, rather than to take a one-size-fits-all approach.

Stage 7. Simulation

Knowing the mean forecast for the margin between the two candidates, and the standard error associated with it, suffices mathematically to provide a probabilistic assessment of the outcome of any one given race. For instance, a candidate with a 7-point lead, in a race where the standard error on the forecast estimate is 5 points, will win her race 92 percent of the time.
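That 92 percent figure is just the normal cumulative distribution evaluated at the ratio of the lead to the standard error, as the sketch below checks.

```python
# The 92 percent figure above is the normal CDF evaluated at lead / standard error.
from statistics import NormalDist

def win_probability(lead, standard_error):
    """Probability that a candidate with the given forecast lead actually wins."""
    return NormalDist(mu=0.0, sigma=standard_error).cdf(lead)

print(round(win_probability(lead=7.0, standard_error=5.0), 3))   # 0.919, about 92 percent
```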

However, this is not the only piece of information that we are interested in. Instead, we might want to know how the results of particular Senate contests are related to one another, in order to determine for example the likelihood of a party gaining a majority, or a supermajority.

Therefore, the error associated with a forecast is decomposed into local and national components by means of a sum-of-squares formula. For Congressional elections, the ‘national’ component of the error is derived from a historical analysis of generic ballot polls: how accurately the generic ballot forecasts election outcomes, and how much the generic ballot changes between Election Day and the period before Election Day. The local component of the error is then assumed to be the residual of the national error from the sum-of-squares formula, i.e. error_local = sqrt(error_total² - error_national²).

The local and national components of the error calculation are then randomly generated (according to a normal distribution) over the course of 100,000 simulation runs. In each simulation run, the degree of national movement is assumed to be the same for all candidates: for instance, all the Republican candidates might receive a 3-point bonus in one simulation, or all the Democrats a 4-point bonus in another. The local error component, meanwhile, is calculated separately for each individual candidate or state. In this way, we avoid the misleading assumption that the results of each election are uncorrelated with one another.
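A toy version of this simulation, keeping only the structure described above (one shared national error per run, an independent local error per race, 100,000 runs), might look like this; the races, margins and error sizes are invented, and the gamma-distributed minor-party vote and the multi-candidate process are omitted.

```python
# Sketch of the simulation stage: in each run, draw one national swing shared by
# every race plus an independent local error per race, then count how often one
# party wins a majority. All forecast margins and error sizes below are invented.

import random

forecast = {"A": +4.0, "B": +1.0, "C": -2.0, "D": -6.0}   # Republican margin per race (assumed)
national_sigma, local_sigma = 2.0, 4.0                     # assumed error decomposition
runs, seats_needed = 100_000, 3

rep_majorities = 0
for _ in range(runs):
    national_swing = random.gauss(0.0, national_sigma)     # same shift applied to every race
    rep_wins = sum(
        1 for margin in forecast.values()
        if margin + national_swing + random.gauss(0.0, local_sigma) > 0
    )
    rep_majorities += rep_wins >= seats_needed

print(f"P(Republican majority) = {rep_majorities / runs:.1%}")
```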

A final step in calculating the error is to randomly assign a small percentage of the vote to minor-party candidates, which is assumed to follow a gamma distribution.

A separate process is followed where three or more candidates are deemed by FiveThirtyEight to be viable in a particular race, which simulates exchanges of voting preferences between each pairing of candidates. This process is structured such that the margins of error associated with multi-candidate races are assumed to be quite high, as there is evidence that such races are quite volatile.

See further:

50th anniversary of the UNIVAC I

CNN

BLUE BELL, Pennsylvania (CNN) — Fifty years ago — on June 14, 1951 — the U.S. Census Bureau officially put into service what it calls the world’s first commercial computer, known as UNIVAC I.

UNIVAC stands for Universal Automatic Computer. The first model was built by the Eckert-Mauchly Computer Corp., which was purchased by Remington Rand shortly before the UNIVAC went on sale.

Rights to the UNIVAC name are currently held by Unisys.

Unisys spokesmen Guy Isnous and Ron Smith say other early users of UNIVACs included the U.S. Air Force, the U.S. Army, the Atomic Energy Commission, General Electric, Metropolitan Life, US Steel, and DuPont.

The UNIVAC was not the first computer ever built. A host of companies, including Eckert-Mauchly, Remington Rand, IBM, and others, all were developing computers for commercial applications at the same time.

Perhaps the most famous computer of the era was the ENIAC, a computer developed for the U.S. military during World War II. Other computers developed in the 1940s were mostly used by academia.

But the UNIVAC I was the first computer to be widely used for commercial purposes — 46 machines were built, for about $1 million each.

Compared to other computers of the era, the UNIVAC I machines were small — about the size of a one-car garage. Each contained about 5,000 vacuum tubes, all of which had to be easily accessible for replacement because they burned out frequently.

Keeping all those vacuum tubes cool was also a major design challenge. The machines were riddled with pipes that circulated cold water to keep the temperature down.

Each unit was so bulky and needed so much maintenance that some of the companies that bought them never moved them to their own facility, instead leaving them on-site at Remington Rand.

UNIVAC I came to the public’s attention in 1952, when CBS used one to predict the outcome of the presidential election. The computer correctly predicted the Eisenhower victory, but CBS did not release that information until after the election because the race was thought to be close.

Finally, see:

Polling Opinion: More Sorcery Than Science

Ivan Kenneally

November 5, 2012

At first glance, political opinion polls seem like the nadir of modern liberal democracy. In their special alchemy they congeal a sensitivity to the will of the people and an emphasis on mathematical exactitude. The poll is the culmination of the peculiar modern marriage of science and popular sovereignty, the technocratic and the democratic. To borrow from Hamilton, and by borrow I mean disfigure, the poll is the ultimate success of our “grand experiment in self-governance.”

Of course, on another interpretation, they are completely useless.

As the estimable Jay Cost points out in the Weekly Standard, the polls this year simply don’t seem to add up, collectively defeated by the strident arithmetic that underwrites their purported value. Depending on what pollster you ask, Romney is poised for an explosive landslide of a victory, or about to win a historically close election, or is about to lose decisively, in a fit of humiliation. If you ask Paul Krugman, and I don’t advise that you should unless you’ve been inoculated against shrill, he will call you stupid for suggesting Romney has any chance at victory.

What all these positions have in common is an appeal to the unassailability of mathematics, that last frontier that resists our postmodern inclinations to promiscuously construct and deconstruct the truth like a pile of lego pieces.

What accounts for the persistent and often wide ranging divergence between polls? The most common answer is that there are fundamental variations in the pool of respondents sampled. For example, polls typically target a particular population: adults at large, registered voters, likely voters, actual voters, and all these categories can be infinitely subdivided and, in labyrinthine ways, overlap. Further muddying already turbid waters, each one of these populations tends to be more or less Republican or Democrat so every poll relies upon some algorithmic method to account for these variations and extrapolate results calibrated in light of them. These methods are themselves borne out of a multiplicity of veiled political assumptions driving the purportedly objective analysis in one direction or another, potentially tincturing the purity of mathematical data with ideological agenda. Math doesn’t lie but those who make decisions about what to count and how to count it surely do.

Another problem is that voter self-identification, a crucial ingredient in any poll, is both fluid and deceptive. Consider that while approximately 35% of all voters classify themselves as “independents”, only 10% of these actually have no party affiliation. In other words, in any given year, voters registered with a certain party might be inspired to vote independently or even switch sides without surrendering their party membership. These episodic fits of quasi-independence can create the illusion that there are grand tectonic shifts in the ideological makeup of the voting public. It’s worth noting that the vast majority of so-called independents pretty reliably vote with their party of registration.

The problem of self-identification is symptomatic of the larger difficulty that polling, for all its mathematical pretensions, depends on the human formulation of questions to be interpreted and then answered by other human beings. Just as the questions posed can be loaded with hidden premises and implicit political judgments, the responses solicited can be more or less honest, clear, and well-considered. It seems methodologically cheap to proudly claim scientific exactitude after counting the yeas and nays generated by the hidden complexity of these exchanges. Measuring what are basically anecdotal reports with numbers doesn’t magically transform a species of hearsay into irrefragable evidence any more than it would my mother’s homespun grapevine of gossip. The ambiguous contours of human language resist the charms of arithmetic.

The ultimate value of any polling is always a matter to be contextually determined, especially in light of our peculiar electoral college which isolates the impact of a voting population within its state. So the oft cited fact that 35% of voters consider themselves independent might seem like a count of great magnitude but most of those reside in states, like California and New York, whose distribution of electoral college votes is a foregone conclusion. When true independent voters in actual swing states are specifically considered, then only 3-5% of the voting population is, in any meaningful sense, genuinely undecided. Despite their incessant production, it is far from clear how informative we can consider polls that generally track the popular vote since, in and of itself, the popular vote decides nothing.

So the mathematical scaffolding of polls all presume non-mathematical foundations, stated and unstated assumptions, partisan inclinations and non-partisan miscalculations. When the vertiginous maelstrom of numbers fails in its most fundamental task, alighting disorder with order, bringing sense to a wilderness of senselessness, then where can we turn for guidance? I can’t just wait for the results Tuesday night–the modern in my marrow craves not just certainty but prediction, absolute knowledge as prologue. There’s no technocratic frisson in finding anything out after the fact, without the prescience of science, which appeals just as much to our desire to be clever as it does to our craving for knowledge.

I will suggest what no political scientist in America is suggesting: set aside the numbing numbers and the conflicting claims to polling precision and follow me in following Aristotle. We must survey what is available to us in ordinary experience, what we can confirm as a matter of pre-scientific perception, the ancient realism that appealed not to computational models, but to the evidence I can see with my own eyes.

What do I see with these eyes? A president running as a challenger, pretending he wasn’t in charge the last four years of blight and disappointment. I see a less than commanding Commander in Chief trying to slither past a gathering scandal that calls into suspicion his character and competence to protect his country. I see a wheezing economy, so infirm our president celebrated a palsied jobs report as evidence of our march to prosperity. I see transparent class warfare that insidiously assumes our embattled middle class resents the rich more than they resent their own shrinking economic opportunity and that women feel flattered and emboldened when condescendingly drawn into a magically conjured cultural war.

I see enthusiastic crowds form around the man they think will deliver them from four years of gruesome ineffectiveness and a defeated left, dispirited and weary, unlikely to convert but less likely to surge. I see ads about Big Bird and a terror of confronting big issues and a president who seems as bored by his performance as we are. Obama does not look like a winner, not to these eyes.

So in an election year hyper-charged with ideological heat, and polling data potentially varnished by self-fulfilling prophecy and partisan wishful thinking, I tend to rely upon an old school conception of realism: what I can see and what I can modestly infer from what I see. Today, as I write this, I see a Romney victory, however narrowly achieved. This would also be a big victory for the common sense of ordinary political perception over the tortured numbers games that aim to capture it precisely, or to mold it presumptuously.

5 Responses to 2012 US presidential election: But who still needs voters when you have Nate Silver? (Did Voter of the Year Nate Silver help Obama’s reelection?)

  1. […] Jacques Parizeau’s 1995 declaration: “It’s true, it’s true that we were beaten, when you get right down to it, by what? By money and ethnic votes, essentially. So that means that next time, instead of being 60 or 61% voting ‘Yes’, we will be 63 or 64% and that will be enough.” — Jacques Parizeau (speech of October 30, 1995) As with Quebec, the Republicans lost their referendum “by money and then … […]


  2. jcdurbant says:

    FROM TWO TO TWENTY-FIVE PERCENT (Donald Trump has a 20 percent chance of becoming president, Nate Silver)

    I recently estimated Trump’s chance of becoming the GOP nominee at 2 percent.

    Nate Silver (Aug. 6, 2015)

    http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/

    Lately, pundits and punters seem bullish on Donald Trump, whose chances of winning the Republican presidential nomination recently inched above 20 percent for the first time at the betting market Betfair. Perhaps the conventional wisdom assumes that the aftermath of the terrorist attacks in Paris will play into Trump’s hands, or that Republicans really might be in disarray. If so, I can see where the case for Trump is coming from, although I’d still say a 20 percent chance is substantially too high.

    Quite often, however, the Trump’s-really-got-a-chance! case is rooted almost entirely in polls. If nothing Trump has said so far has harmed his standing with Republicans, the argument goes, why should we expect him to fade later on?

    One problem with this is that it’s not enough for Trump to merely avoid fading. Right now, he has 25 to 30 percent of the vote in polls among the roughly 25 percent of Americans who identify as Republican. (That’s something like 6 to 8 percent of the electorate overall, or about the same share of people who think the Apollo moon landings were faked.) As the rest of the field consolidates around him, Trump will need to gain additional support to win the nomination. That might not be easy, since some Trump actions that appeal to a faction of the Republican electorate may alienate the rest of it. Trump’s favorability ratings are middling among Republicans (and awful among the broader electorate).

    Trump will also have to get that 25 or 30 percent to go to the polls. For now, most surveys cover Republican-leaning adults or registered voters, rather than likely voters. Combine that with the poor response rates to polls and the fact that an increasing number of polls use nontraditional sampling methods, and it’s not clear how much overlap there is between the people included in these surveys and the relatively small share of Republicans who will turn up to vote in primaries and caucuses.

    But there’s another, more fundamental problem. That 25 or 30 percent of the vote isn’t really Donald Trump’s for the keeping. In fact, it doesn’t belong to any candidate. If past nomination races are any guide, the vast majority of eventual Republican voters haven’t made up their minds yet.

    It can be easy to forget it if you cover politics for a living, but most people aren’t paying all that much attention to the campaign right now. Certainly, voters are consuming some campaign-related news. Debate ratings are way up, and Google searches for topics related to the primaries have been running slightly ahead of where they were at a comparable point of the 2008 campaign, the last time both parties had open races. But most voters have a lot of competing priorities. Developments that can dominate a political news cycle, like Trump’s frenzied 90-minute speech in Iowa earlier this month, may reach only 20 percent or so of Americans …

    https://fivethirtyeight.com/features/dear-media-stop-freaking-out-about-donald-trumps-polls/

    Trump is at only 36 percent in our national polling average, while Clinton is at only 43 percent. Gary Johnson, the Libertarian Party candidate whom our model explicitly includes in the forecast, is polling in the double digits in some polls, while we’re seeing a significant undecided vote and some votes for other candidates, such as Jill Stein of the Green Party. Historically, high numbers of undecided voters contribute to uncertainty and volatility. So do third-party candidates, whose numbers sometimes fade down the stretch run. With Clinton at only 43 percent nationally, Trump doesn’t need to take away any of her voters to win. He just needs to consolidate most of the voters who haven’t committed to a candidate yet …

    http://fivethirtyeight.com/features/donald-trump-has-a-20-percent-chance-of-becoming-president

    Trump is one of the most astonishing stories in American political history. If you really expected the Republican front-runner to be bragging about the size of his anatomy in a debate, or to be spending his first week as the presumptive nominee feuding with the Republican speaker of the House and embroiled in a controversy over a tweet about a taco salad, then more power to you. Since relatively few people predicted Trump’s rise, however, I want to think through his nomination while trying to avoid the seduction of hindsight bias. What should we have known about Trump and when should we have known it?

    It’s tempting to make a defense along the following lines:

    Almost nobody expected Trump’s nomination, and there were good reasons to think it was unlikely. Sometimes unlikely events occur, but data journalists shouldn’t be blamed every time an upset happens, particularly if they have a track record of getting most things right and doing a good job of quantifying uncertainty.

    We could emphasize that track record; the methods of data journalism have been highly successful at forecasting elections. That includes quite a bit of success this year. The FiveThirtyEight “polls-only” model has correctly predicted the winner in 52 of 57 (91 percent) primaries and caucuses so far in 2016, and our related “polls-plus” model has gone 51-for-57 (89 percent). Furthermore, the forecasts have been well-calibrated, meaning that upsets have occurred about as often as they’re supposed to but not more often.

    But I don’t think this defense is complete — at least if we’re talking about FiveThirtyEight’s Trump forecasts. We didn’t just get unlucky: We made a big mistake, along with a couple of marginal ones.

    The big mistake is a curious one for a website that focuses on statistics. Unlike virtually every other forecast we publish at FiveThirtyEight — including the primary and caucus projections I just mentioned — our early estimates of Trump’s chances weren’t based on a statistical model. Instead, they were what we called “subjective odds” — which is to say, educated guesses. In other words, we were basically acting like pundits, but attaching numbers to our estimates. And we succumbed to some of the same biases that pundits often suffer, such as not changing our minds quickly enough in the face of new evidence. Without a model as a fortification, we found ourselves rambling around the countryside like all the other pundit-barbarians, randomly setting fire to things.

    There’s a lot more to the story, so I’m going to proceed in five sections:

    1. Our early forecasts of Trump’s nomination chances weren’t based on a statistical model, which may have been most of the problem.

    2. Trump’s nomination is just one event, and that makes it hard to judge the accuracy of a probabilistic forecast.

    3. The historical evidence clearly suggested that Trump was an underdog, but the sample size probably wasn’t large enough to assign him quite so low a probability of winning.

    4. Trump’s nomination is potentially a point in favor of “polls-only” as opposed to “fundamentals” models.

    5. There’s a danger in hindsight bias, and in overcorrecting after an unexpected event such as Trump’s nomination …

    https://fivethirtyeight.com/features/how-i-acted-like-a-pundit-and-screwed-up-on-donald-trump


  3. jcdurbant says:

    WHEN ALL ELSE FAILS, BLAME THE ELECTORAL COLLEGE! (Like parliamentary systems such as those of Canada, Israel, the UK, Germany or India, the Electoral College forces parties and presidential candidates to build broad coalitions – guess who in 1992 won with only 43 percent of the popular vote but received over 68 percent of the electoral vote?)

    Under the Electoral College system, presidential elections are decentralized, taking place in the states. Although some see this as a flaw—U.S. Senator Elizabeth Warren opposes the Electoral College expressly because she wants to increase federal power over elections—this decentralization has proven to be of great value. For one thing, state boundaries serve a function analogous to that of watertight compartments on an ocean liner. Disputes over mistakes or fraud are contained within individual states. Illinois can recount its votes, for instance, without triggering a nationwide recount. This was an important factor in America’s messiest presidential election—which was not in 2000, but in 1876.

    That year marked the first time a presidential candidate won the electoral vote while losing the popular vote. It was a time of organized suppression of black voters in the South, and there were fierce disputes over vote totals in Florida, Louisiana, and South Carolina. Each of those states sent Congress two sets of electoral vote totals, one favoring Republican Rutherford Hayes and the other Democrat Samuel Tilden. Just two days before Inauguration Day, Congress finished counting the votes—which included determining which votes to count—and declared Hayes the winner. Democrats proclaimed this “the fraud of the century,” and there is no way to be certain today—nor was there probably a way to be certain at the time—which candidate actually won. At the very least, the Electoral College contained these disputes within individual states so that Congress could endeavor to sort it out. And it is arguable that the Electoral College prevented a fraudulent result.

    Four years later, the 1880 presidential election demonstrated another benefit of the Electoral College system: it can act to amplify the results of a presidential election. The popular vote margin that year was less than 10,000 votes—about one-tenth of one percent—yet Republican James Garfield won a resounding electoral victory, with 214 electoral votes to Democrat Winfield Hancock’s 155. There was no question who won, let alone any need for a recount. More recently, in 1992, the Electoral College boosted the legitimacy of Democrat Bill Clinton, who won with only 43 percent of the popular vote but received over 68 percent of the electoral vote.

    But there is no doubt that the greatest benefit of the Electoral College is the powerful incentive it creates against regionalism. Here, the presidential elections of 1888 and 1892 are most instructive. In 1888, incumbent Democratic President Grover Cleveland lost reelection despite receiving a popular vote plurality. He won this plurality because he won by very large margins in the overwhelmingly Democratic South. He won Texas alone by 146,461 votes, for instance, whereas his national popular vote margin was only 94,530. Altogether he won in six southern states with margins greater than 30 percent, while only tiny Vermont delivered a victory percentage of that size for Republican Benjamin Harrison.

    In other words, the Electoral College ensures that winning supermajorities in one region of the country is not sufficient to win the White House. After the Civil War, and especially after the end of Reconstruction, that meant that the Democratic Party had to appeal to interests outside the South to earn a majority in the Electoral College. And indeed, when Grover Cleveland ran again for president four years later in 1892, although he won by a smaller percentage of the popular vote, he won a resounding Electoral College majority by picking up New York, Illinois, Indiana, Wisconsin, and California in addition to winning the South.

    Whether we see it or not today, the Electoral College continues to push parties and presidential candidates to build broad coalitions. Critics say that swing states get too much attention, leaving voters in so-called safe states feeling left out. But the legitimacy of a political party rests on all of those safe states—on places that the party has already won over, allowing it to reach farther out. In 2000, for instance, George W. Bush needed every state that he won—not just Florida—to become president. Of course, the Electoral College does put a premium on the states in which the parties are most evenly divided. But would it really be better if the path to the presidency primarily meant driving up the vote total in the deepest red or deepest blue states?

    Also, swing states are the states most likely to have divided government. And if divided government is good for anything, it is accountability. So with the Electoral College system, when we do wind up with a razor-thin margin in an election, it is likely to happen in a state where both parties hold some power, rather than in a state controlled by one party.

    ***

    Despite these benefits of the current system, opponents of the Electoral College maintain that it is unseemly for a candidate to win without receiving the most popular votes. As Hillary Clinton put it in 2000: “In a democracy, we should respect the will of the people, and to me, that means it’s time to do away with the Electoral College.” Yet similar systems prevail around the world. In parliamentary systems, including Canada, Israel, and the United Kingdom, prime ministers are elected by the legislature. This happens in Germany and India as well, which also have presidents who are elected by something similar to an electoral college. In none of these democratic systems is the national popular vote decisive.

    More to the point, in our own political tradition, what matters most about every legislative body, from our state legislatures to the House of Representatives and the Senate, is which party holds the majority. That party elects the leadership and sets the agenda. In none of these representative chambers does the aggregate popular vote determine who is in charge. What matters is winning districts or states.

    Nevertheless, there is a clamor of voices calling for an end to the Electoral College. Former Attorney General Eric Holder has declared it “a vestige of the past,” and Washington Governor Jay Inslee has labeled it an “archaic relic of a bygone age.” Almost as one, the current myriad of Democratic presidential hopefuls have called for abolishing the Electoral College.

    Few if any of these Democrats likely realize how similar their party’s position is to what it was in the late nineteenth century, with California representing today what the South was for their forebears. The Golden State accounted for 10.4 percent of presidential votes cast in 2016, while the southern states (from South Carolina down to Florida and across to Texas) accounted for 10.6 percent of presidential votes cast in 1888. Grover Cleveland won those southern states by nearly 39 percent, while Hillary Clinton won California by 30 percent. But rather than following Cleveland’s example of building a broader national coalition that could win in the Electoral College, today’s Democrats would rather simply change the rules.

    ***

    Anti-Electoral College amendments with bipartisan support in the 1950s and 1970s failed to receive the two-thirds votes in Congress they needed in order to be sent to the states for consideration. Likewise today, partisan amendments will not make it through Congress. Nor, if they did, could they win ratification among the states.

    But there is a serious threat to the Electoral College. Until recently, it had gone mostly unnoticed as it made its way through various state legislatures. If it works according to its supporters’ intent, it would nullify the Electoral College by creating a de facto direct election for president.

    The National Popular Vote Interstate Compact, or NPV, takes advantage of the flexibility granted to state legislatures in the Constitution: “Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors.” The original intent of this was to allow state legislators to determine how best to represent their state in presidential elections. The electors represent the state—not just the legislature—even though the latter has power to direct the manner of appointment. By contrast, NPV supporters argue that this power allows state legislatures to ignore their state’s voters and appoint electors based on the national popular vote. This is what the compact would require states to do.

    Of course, no state would do this unilaterally, so NPV has a “trigger”: it only takes effect if adopted by enough states to control 270 electoral votes—in other words, a majority that would control the outcome of presidential elections. So far, 14 states and the District of Columbia have signed on, with a total of 189 electoral votes.

    Until this year, every state that had joined NPV was heavily Democratic: California, Connecticut, Hawaii, Illinois, Maryland, Massachusetts, New Jersey, New York, Rhode Island, Vermont, and Washington. The NPV campaign has struggled to win other Democratic states: Delaware only adopted it this year and it still has not passed in Oregon (though it may soon). Following the 2018 election, Democrats came into control of both the legislatures and the governorships in the purple states of Colorado and New Mexico, which have subsequently joined NPV.

    NPV would have the same effect as abolishing the Electoral College. Fraud in one state would affect every state, and the only way to deal with it would be to give more power to the federal government. Elections that are especially close would require nationwide recounts. Candidates could win based on intense support from a narrow region or from big cities. NPV also carries its own unique risks: despite its name, the plan cannot actually create a national popular vote. Each state would still—at least for the time being—run its own elections. This means a patchwork of rules for everything from which candidates are on the ballot to how disputes are settled. NPV would also reward states with lax election laws—the higher the turnout, legal or not, the more power for that state. Finally, each NPV state would certify its own “national” vote total. But what would happen when there are charges of skullduggery? Would states really trust, with no power to verify, other states’ returns?

    Uncertainty and litigation would likely follow. In fact, NPV is probably unconstitutional. For one thing, it ignores the Article I, Section 10 requirement that interstate compacts receive congressional consent. There is also the fact that the structure of the Electoral College clause of the Constitution implies there is some limit on the power of state legislatures to ignore the will of their state’s people.

    One danger of all these attacks on the Electoral College is, of course, that we lose the state-by-state system designed by the Framers and its protections against regionalism and fraud. This would alter our politics in some obvious ways—shifting power toward urban centers, for example—but also in ways we cannot know in advance. Would an increase in presidents who win by small pluralities lead to a rise of splinter parties and spoiler candidates? Would fears of election fraud in places like Chicago and Broward County lead to demands for greater federal control over elections?

    The more fundamental danger is that these attacks undermine the Constitution as a whole. Arguments that the Constitution is outmoded and that democracy is an end in itself are arguments that can just as easily be turned against any of the constitutional checks and balances that have preserved free government in America for well over two centuries. The measure of our fundamental law is not whether it actualizes the general will—that was the point of the French Revolution, not the American. The measure of our Constitution is whether it is effective at encouraging just, stable, and free government—government that protects the rights of its citizens.

    The Electoral College is effective at doing this. We need to preserve it, and we need to help our fellow Americans understand why it matters.

    Trent England

    https://imprimis.hillsdale.edu/danger-attacks-electoral-college/

    https://en.wikipedia.org/wiki/1992_United_States_presidential_election

  4. jcdurbant says:

    WHAT MOVING OF THE GOALPOSTS AND WHAT SORE LOSERS?

    Progressive candidates and new Democratic representatives have offered lots of radical new proposals lately about voting and voters. They include scrapping the 215-year-old Electoral College. Progressives also talk of extending the vote to 16- or 17-year-olds and ex-felons. They wish to further relax voter-identification requirements, and they support same-day registration and voting, as well as allowing undocumented immigrants to vote in local elections. The 2016 victory of Donald Trump shocked the left. It was entirely unexpected, given that experts had all but assured a Hillary Clinton landslide. Worse still for those on the left, Trump, like George W. Bush in 2000 and three earlier winning presidential candidates, lost the popular vote. (…)

    The Electoral College was designed in part to ensure that candidates at least visited the small and often rural states of America. The generation of the Founding Fathers did not want elections to rest solely with larger urban populations. The Electoral College balances out the popular vote. The founders were also terrified of radical democracies of the past, especially their frenzied tendencies to adopt mob-like tactics. In response, the Electoral College was designed to discourage crowded fields of all sorts of fringe presidential candidates in which the eventual winner might win only a small plurality of the popular vote. (…)

    A cynic might suggest that had Hillary Clinton actually won the 2016 Electoral College vote but lost the popular vote to Trump, progressives would now be praising our long-established system of voting. Had current undocumented immigrants proved as conservative as past waves of legal immigrants from Hungary and Cuba, progressives would now likely wish to close the southern border and perhaps even build a wall. If same-day registration and voting meant that millions of new conservatives without voter IDs were suddenly showing their Trump support at the polls, progressives would insist on bringing back old laws that required voters to have previously registered and to show valid identification at voting precincts. If felons or 16-year-old kids polled conservative, then certainly there would be no progressive push to let members of these groups vote. Expanding and changing the present voter base and altering how we vote is mostly about power, not principles. Without these radical changes, a majority of American voters, in traditional and time-honored elections, will likely not vote for the unpopular progressive agenda.

    VDH

    https://www.nationalreview.com/2019/03/progressive-election-rule-changes-power-over-princples/
