The Rise of Robopolling in California
in 2010 and Its Implications
by Mark DiCamillo
When comparing the polling in California this year to previous years, two
things stand out. The first is the sheer number of pre-election polls
conducted and reported. When reviewing only polls conducted in California’s
general election races for governor and U.S. Senate, by my count there were
at least 75 different statewide public polls completed by 14 different
polling organizations. And this doesn’t even include the many private polls conducted in each contest for the various political candidates and campaigns.
The other thing that jumps out is that more of the polls were robopolls, also referred to as Interactive Voice Response (IVR) polls, than in any previous year.
By my count nearly half of all of the statewide public polls reported in
California’s general election in the governor and Senate races were
robopolls. If you were to also include polls conducted in local election
contests across the state, robopolls constituted the majority of all public
polls in California in this year’s general election.
Because robopolls are now so prevalent, it is more important than ever for the media and the public to understand just how these polls differ from traditional telephone polls, especially those conducted by the state’s three leading public polling organizations: The Field Poll, the Public Policy Institute of California, and The Los Angeles Times/USC Poll.
Comparing the Methodologies of Robopolls to Traditional Telephone Polls
The basic survey approach of robopolls is to contact people by telephone using the recorded voice of a professional announcer. The recording instructs those answering the phone to use their phone’s keypad to answer the poll’s questions. Traditional polls instead use live interviewers, who call voters and ask each question directly.
Because most of the costs of conducting a traditional telephone survey come from the time spent by, and wages paid to, telephone interviewers and their supervisors during data collection, robopolls, which carry essentially no interviewer costs, can be conducted at a fraction of the cost of a traditional phone survey. This is one of the reasons they are now so prolific.
However, other than cost, robopolls differ from traditional telephone
surveys in a number of important ways.
(1) Short polling period, no callbacks.
Most robopolls are conducted very quickly, over a one-day period. They typically make only one attempt to reach a voter at each number dialed. If no one answers the phone, they do not call that number back but simply replace it with a new telephone listing.
As a result, robopolls have significantly lower response rates than traditional polls and reach only those segments of the voting public that are easiest to contact.
Contrast this with a Field, PPIC or Times/USC poll, which is typically conducted over a one-week period and makes six to eight attempts at each usable number to try to bring voters into the sample. While this is more costly and time consuming, it produces samples that more closely capture the varied demography of California’s voters: working and non-working, old and young, white non-Hispanic and ethnic, those living alone and those living in multi-family households.
(2) Limited knowledge about who is actually answering their questions.
Robopolls make calls from a random digit dial sample of all possible
residential landline telephone numbers within the political jurisdiction
they are polling. The recording asks the person answering to indicate whether he or she is a registered voter, leaving this important selection criterion entirely in the hands of the respondent.
The Field Poll and the Times/USC Poll sample voters from lists derived from the state’s official voter registration rolls, as did virtually every private poll conducted for a political campaign in California in 2010. This gives the poll a number of advantages. First, it enables interviewers to ask to speak to a specific individual by name, and if that individual is not available, the interviewer can make an appointment to call back that voter at a later time. Also, because the sample of names is derived from lists of known voters, we know by definition that the person we are seeking is indeed a registered voter. Working from a voter list also provides the pollster with the voter’s actual party registration, as well as his or her frequency of voting in past elections, since this information is contained in the official voting records. This information can also be used to ensure that the sample is properly aligned to the state’s actual party registration and to identify which voters are most likely to vote.
(3) Exclusion of cell phones.
By law, the automated dialing devices used by the robopolls are not
allowed to call cell phones. Traditional telephone polls routinely dial cell phones by hand to include them in their samples. Since more than 20% of all California voters now live in cell phone-only households and cannot be reached by dialing a random sample of landline listings, most robopollsters systematically exclude these voters from their samples.
(4) Language limitations.
To my knowledge, the pre-recorded messages of most robopolls are in
English only. This excludes from their samples the additional set of voters
who do not understand spoken English. By contrast, Field, PPIC and the
Times/USC polls routinely conduct all of their statewide polls in English
and Spanish. In addition, Field’s final pre-election poll this year was
extended further to include four other Asian languages and dialects:
Cantonese, Mandarin, Korean and Vietnamese.
We estimate that 7%-10% of all registered voters in California would either
prefer or require non-English language interviewing when completing a
telephone survey, so this portion of the state’s fast-growing ethnic voters
is under-represented by the robopolls.
(5) The need to construct a model and apply larger weighting adjustments.
Each of these factors means that the quality of the raw unadjusted
survey data derived from robopolls is significantly lower than that of
traditional telephone polls, like those conducted by Field, PPIC and the
Times/USC.
In their methodological descriptions, robopollsters admit that women are much
more likely to participate in their surveys than men, and that older voters
are included in their samples in far greater numbers than young or middle
age voters. Because their initial data are less representative, robopolls
need to make fairly major adjustments to their raw data to bring their
samples into balance with the characteristics of the larger voting
population.
By contrast, the unadjusted raw data obtained by traditional telephone
pollsters more closely reflect the actual population of voters they are
polling. While The Field Poll does make weighting adjustments to its
samples, the adjustments tend to be small and have a modest impact on the
overall statewide findings.
For example, The Field Poll’s final pre-election poll this year showed
Democrats Jerry Brown and Barbara Boxer ahead of their Republican opponents
in the races for governor and U.S. Senate by eight
to ten percentage points in both our unweighted and weighted
samples. The main impact of the weighting or sample adjustments was to align
the various subgroups to known characteristics of the voter population.
Importantly, they did not have much impact or significantly alter the
overall statewide preference distributions initially found in the survey.
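To make the weighting step concrete, the sketch below shows the basic arithmetic of post-stratification weighting, the standard technique behind these sample adjustments. Every number in it is invented for illustration; none comes from an actual poll. Each group’s weight is simply its share of the voter population divided by its share of the raw sample.

```python
# A minimal sketch of post-stratification weighting, with invented numbers.
# Each group's weight is its share of the voter population divided by its
# share of the raw sample; weighted results then reflect the population mix.

# Hypothetical raw sample skewed toward women, as robopollsters report
# their samples tend to be.
sample_counts = {"men": 350, "women": 650}        # raw respondents
population_shares = {"men": 0.47, "women": 0.53}  # known voter population

total = sum(sample_counts.values())
weights = {
    group: population_shares[group] / (sample_counts[group] / total)
    for group in sample_counts
}

# Hypothetical candidate support within each group.
support = {"men": 0.48, "women": 0.56}

unweighted = sum(sample_counts[g] * support[g] for g in sample_counts) / total
weighted = sum(population_shares[g] * support[g] for g in sample_counts)

print(f"weights: {weights}")            # men ~1.34, women ~0.82
print(f"unweighted: {unweighted:.1%}")  # about 53.2%
print(f"weighted:   {weighted:.1%}")    # about 52.2%
```

With a modest skew like this, the correction moves the topline by only about a point. The more lopsided a robopoll’s raw sample, the larger and more consequential these corrections become.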
Despite their sampling drawbacks, the better robopollsters are able to
transform the survey information they obtain into reasonable pre-election
poll estimates by developing a sophisticated model of the probable
electorate and adjusting their sample to conform to its characteristics.
Because of this, I view the better robopollsters more as skilled modelers of
the electorate than as high quality survey researchers.
But because robopollsters must construct models to determine the overall shape of the probable electorate, rather than relying on actual survey data or information about each respondent’s voting record, the modeling itself can create problems.
For example, the determination of how many Democrats and Republicans to
include in a sample is closely tied to voting preferences. In making this determination, the robopollster effectively takes great liberties in deciding who is ahead and by how much, since even a slight change in the partisan distribution of the sample will shift the preference distributions in most election contests.
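The sensitivity at issue here can be shown with a few lines of arithmetic. In the hypothetical sketch below (all figures invented), support for a candidate within each party is held fixed, and only the assumed partisan composition of the electorate changes:

```python
# Hypothetical illustration of how the assumed partisan mix moves a poll's
# topline. Support within each party is held fixed; only the weighting
# targets for party composition change. All numbers are invented.

support_for_candidate = {"dem": 0.88, "rep": 0.08, "ind": 0.45}

def topline(mix):
    """Overall support for the candidate given a partisan mix (shares sum to 1)."""
    return sum(mix[party] * support_for_candidate[party] for party in mix)

mix_a = {"dem": 0.44, "rep": 0.31, "ind": 0.25}  # one plausible electorate
mix_b = {"dem": 0.41, "rep": 0.34, "ind": 0.25}  # a slightly redder model

print(f"model A topline: {topline(mix_a):.1%}")  # roughly 52%
print(f"model B topline: {topline(mix_b):.1%}")  # roughly 50%
```

A three-point shift in the assumed Democratic-Republican split, a difference no one can adjudicate from the raw data alone, moves the topline by about two and a half points, enough to change the apparent leader in a close race.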
This is perhaps the most worrisome aspect of the robopoll method, since it
confers upon robopollsters greater latitude in influencing the outcomes of
their poll measures, and risks introducing systematic biases into their
polls.
Some robopollsters admit to taking into account a state’s voting history,
national trends and recent polling to construct their partisan weighting
targets. This means that the proportions of Democrats and Republicans
allocated to their sample are derived from subjective judgments about the
historical and prevailing political conditions in a given state and from
other polls already conducted in that political jurisdiction.
It would be revealing to be able to compare a robopoll’s unadjusted and
adjusted poll distributions in their pre-election preference measures. I
suspect that if this information were available, it would reveal wide
differences between the two estimates.
Because robopollsters make subjective judgments when establishing their estimates of the composition of the likely electorate, this method can easily produce an entire array of different possible survey results. It is left to the robopollster to choose which result, or political reality, best fits their expectations at that moment in time. This is not only dangerous; over the long term it undermines public confidence in the objectivity of the entire public opinion polling process.
Concerns About the Future of Polling
As more pre-election polls employ the robopolling method, my fear is that
they will crowd out the other higher quality polls that are being conducted,
leaving the media and the public with a sometimes confusing array of
pre-election poll estimates to sort through.
This is not to say that the better robopollsters are manipulating their poll results for their own ends. Some are trying to make up for the deficiencies in their initial survey samples. For example, at least one robopollster extended its data collection over a longer three-day period and experimented with using live interviewers to call separate samples of cell phone listings, in an attempt to reach the voters who can only be contacted by cell phone.
Yet the other problems inherent in their survey approach remain. This is why
I continue to view the results of most of them cautiously.
A Final Aside: Polling on Proposition 19
One other controversy between robopolls and traditional telephone polls
surfaced this year in California during the Proposition 19 marijuana
legalization initiative campaign.
When polling this year on Prop. 19, the traditional telephone pollsters
fielded a number of inquiries from reporters and others questioning the
reliability of live interviewer telephone polls conducted on a controversial
topic like marijuana. The theory they presented was that because
robopollsters avoid direct human interactions when conducting their polls,
voters felt less constrained about admitting their true opinions on Prop.
19.
Most of the literature regarding interviewer effects in research on sensitive topics, like marijuana, relates to people being asked about their own personal behaviors that might be embarrassing or socially undesirable.
This, in my opinion, doesn’t apply when polling on a policy issue like Prop.
19, which simply asks voters their opinions about an initiative to legalize
marijuana’s sale and use.
At the time, I challenged those questioning the accuracy of live interviewer polls on the topic to revisit the issue after the election. Well, the results are in, and the live interviewer polls from Field, PPIC and the Times/USC were generally closer to the final vote on Prop. 19 than the robopolls.
In their final pre-election surveys the state’s three leading traditional
telephone polls showed Prop. 19 trailing by an average of 8 percentage
points. By contrast, the average of the two final pre-election robopolls
conducted in California showed Prop. 19 trailing by just 4.5 percentage
points. According to the California Secretary of State, with nearly nine
million votes counted and more than one million votes yet to be counted,
California voters were rejecting Prop. 19 by eight percentage points, 54% to
46%.
I hope this puts that theory to rest.
Copyright © 2010 POLLING REPORT, INC.