All the second-guessers are asking how so many polls got it so wrong. The political class doesn’t want to hear it, but the answer is fairly simple. First, most of the pollsters are using methodologies that are fraudulent.
The most common is some form of quota sampling. Within Federal agencies and many state agencies, this method is prohibited for any data collection that may influence public policy. How do I know this? Because I helped write the regulation that enforces the prohibition. Why is it prohibited? Because 1) it lacks the attributes that give scientific sampling its credibility, and 2) it is vulnerable to gross manipulation.
If you examine the theory that supports estimating the error in a sample-based estimate, you will find that these quasi-random sampling methods do not satisfy the conditions necessary for an unbiased mean or a valid sampling variance. The attributes of scientific sampling that put bounds on the error in an estimate are simply not present in quota sampling. So claims that the error in a result is, say, plus or minus 3 percent (with 95 percent confidence) are false; that calculation is valid only with rigorous probability sampling.
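To make that concrete: the familiar "plus or minus 3 percent" figure comes from the standard error of a proportion under simple random sampling, and the formula carries no guarantee when respondents are not a probability sample. A minimal sketch (the sample size here is illustrative, not from any particular poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion under simple random
    sampling. Valid ONLY when respondents are a probability sample of the
    target population."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of roughly a thousand respondents:
moe = margin_of_error(1067)
print(round(moe * 100, 1))  # 3.0 percentage points
```

Note that the only input is the sample size; nothing in the formula can detect or correct a selection process that favors one side.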
Empirical studies have shown that the error in common forms of quasi-random sampling is often two to three times larger (e.g., 9 percent rather than 3). Worse, it is not symmetric: it is usually dominated by bias rather than sampling variance, and so it departs from the true value in one direction.
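The bias-versus-variance distinction can be seen in a toy simulation. The numbers below are made up for illustration: an estimator whose selection process systematically over-represents one side by 6 points, with modest sampling noise on top.

```python
import random

random.seed(2)
truth = 0.50  # hypothetical true support level

# 1,000 simulated polls; each is the truth plus a 6-point selection bias
# plus sampling noise (standard deviation 1.5 points):
estimates = [truth + 0.06 + random.gauss(0, 0.015) for _ in range(1000)]

mean_est = sum(estimates) / len(estimates)
bias = mean_est - truth  # systematic, one-directional error
sd = (sum((e - mean_est) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(round(bias, 3), round(sd, 3))  # the bias dwarfs the random spread
```

Every one of these simulated polls misses in the same direction; taking more of them, or larger ones, shrinks the spread but never the bias.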
One tell-tale sign that these unreliable methods are in play is the need for demographic, party-affiliation, or gender “weighting.” In a true random sample that is large enough (and by this I also mean stratified, clustered, and other complex probability designs), the various subgroups of the target population are automatically represented in roughly the correct proportions. There is no need for post-stratification, imputation, or other band-aids for shortcomings of the sampling process.
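This self-correcting property of probability samples is easy to demonstrate in a toy simulation (the population shares below are hypothetical, chosen only for illustration):

```python
import random

random.seed(0)
# Hypothetical population of 100,000: 52% in group A, 48% in group B
population = ["A"] * 52_000 + ["B"] * 48_000

# A simple random sample, with no weighting or adjustment of any kind:
sample = random.sample(population, 1500)
share_A = sample.count("A") / len(sample)
print(round(share_A, 3))  # lands close to the true 0.52 on its own
```

The subgroup proportions come out approximately right purely because every member of the population had an equal chance of selection; no post-hoc reweighting is involved.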
So polls that “oversample” Democrats, or assign them a higher weight, display nothing more than a bias injected by the designer.
The bias has been so embarrassing that even the media hacks often fall back on the Real Clear Politics average of poll results. Do you know what you get when you average a bunch of polls biased in the same direction? You get a biased average that merely looks less biased than the most ridiculous individual results. (The averaging process only works when the polls contain a roughly even mix of biases in opposite directions.)
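A quick simulation illustrates why averaging helps in one case and not the other. The bias and noise magnitudes here are hypothetical:

```python
import random

random.seed(1)
true_margin = 0.0  # hypothetical true candidate margin (a dead heat)

def poll(bias, noise=0.02):
    """One simulated poll result: truth + systematic bias + sampling noise."""
    return true_margin + bias + random.gauss(0, noise)

# Ten polls that all lean the same way by 3 points:
same_way = [poll(0.03) for _ in range(10)]
print(round(sum(same_way) / 10, 3))  # average stays near +0.03, not 0.0

# Averaging cancels error only when biases point in opposite directions:
mixed = [poll(b) for b in [0.03, -0.03] * 5]
print(round(sum(mixed) / 10, 3))  # average lands near the true 0.0
```

Averaging cancels the random noise in both cases, but a shared directional bias passes straight through to the average.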
The other major problem that generates error in opinion polls is the accuracy of the opinion measure itself. Each respondent applies his or her own internal scale to his or her opinion, and some respondents who distrust the interviewer or the process will simply lie to the pollster. Two people who objectively hold nearly identical opinions may describe them differently, or even lie about them.
Three polls got it right in the 2016 election cycle:
The LA Times Daily Poll (a rotating panel) used a 100-point scale to rationalize each respondent’s opinion, permitting a more uniform measure and a subtle way of capturing changes over time. This poll said Trump wins by about 5 points.
The Investors Business Daily poll emphasized rigorous sampling methods plus some proprietary internal models. They had Trump ahead by 2 percent.
The third set of polls was by a man named Cahaly of the Trafalgar Group. His organization focused on battleground states and got most of them right (for Trump), except for Michigan, where the difference was a fraction of one percent. Cahaly used a question about the views of the respondent’s neighbors to flag respondents who might be hedging their own preference. (This is a proven method in sensitive surveys.)
An academic acquaintance of mine suggested that these polls probably had a small sampling bias in favor of Trump and thus overestimated his popular vote. Possible, but there is a much simpler explanation.
Some of the things these polls cannot measure:
1) the number of dead people voting; 2) the number of duplicate absentees voting; and 3) the number of illegal aliens voting. (Not to mention the Soros voting machines in 15 states and the widespread use of “fractional” counting procedures in voting-machine software; vote counts should be integers, so why convert to decimals?)
The Census Bureau learned long ago that illegals will not respond to the Census in spite of penalties imposed by law. And neither the LA Times nor IBD has found a way to reach fake dead voters and fake absentees by phone.
So if we could design a rigorous sampling estimate, we could get a handle on the rough magnitude of net Democrat voter fraud. Don’t laugh: the Census Bureau conducts a large, expensive sample survey called the Post-Enumeration Survey to measure the level of error in the complete enumeration every ten years.
So why were the polls so far off?
They were deliberately biased to 1) shape rather than measure public opinion, and 2) provide cover for the magnitude of voter fraud.
Let’s see: using the smaller IBD number, 2 percent plus Hillary’s 0.3 percent margin of the counted vote suggests about 2.7 million fraudulent votes generated by Soros and the Democrats; no longer insignificant. Some of the most biased polls and most blatant fraud were here in Virginia. Come to think of it, was Gore’s popular-vote “advantage” in 2000 real or fraudulent? That “factoid” stirred Soros-funded street gangs for a decade or more.
Draining this particular swamp should be a priority for the new administration. There is adequate precedent in the voting-rights laws that put many southern states under Justice Department supervision for their history of voting-rights violations. Let’s put states that support “sanctuary” cities and won’t use voter ID under the microscope until they clean up their act.