
Survey Research and Questionnaires

Survey responses are subject to random influences: some respondents may have had a fight with their spouse the evening prior to the survey, while other respondents' spouses may have cooked the respondent's favorite meal.


Sources of error in surveys include:

- The sampling procedure itself. Since surveys are based on samples and not the entire population, there is a certain amount of random sampling error in any survey.
- Certain unavoidable systematic errors (the NES does not interview in Alaska and Hawaii, for example).
- Refusals to cooperate with the survey by potential respondents. As the number of surveys and polls has increased in recent years, respondents have displayed what has been called "survey fatigue," and non-response has increased over time.
- Lack of candor by respondents on questions that have socially acceptable answers ("Did you vote in the election?").
- The inability of respondents to remember past behaviors ("Who did you vote for in the last presidential election?").
- Respondents misconstruing survey questions as exams, and so providing answers to questions that they really have not thought much about ("non-attitudes").
- Badly trained interviewers, who might give respondents cues as to how to answer questions, misrecord respondents' answers, or falsify data.

Where these and other errors are random in nature, they are annoying, but when the errors are systematic they can cause great trouble.

We can reduce systematic error in a survey through proper training of interviewers and adherence to accepted norms for developing and conducting surveys, such as those developed by the American Association for Public Opinion Research (AAPOR). It is important to be aware of these potential sources of error when working with survey data.

Small differences in percentages may be the result of error and so be virtually meaningless.

The National Election Study (NES) was conducted entirely face-to-face. Great care was taken in identifying the sample, training interviewers, and conducting the interviews themselves. Each interviewer was given a laptop computer with the survey questionnaire software pre-installed, so that the interviewer could enter the respondent's answers as the interview proceeded.

The care that the NES takes in conducting surveys results in data of very high quality, but it is also expensive. Face-to-face surveys cost more to conduct than telephone surveys, since face-to-face surveys require interviewers to be sent out into the field while telephone interviewers can all sit in one room. To ensure that face-to-face interviews are high quality, the field interviewers must be very highly trained, since there is little supervision once they are out of the office.

Many researchers feel that face-to-face interviews yield "richer" data: the interviewer can ask a variety of follow-up questions, can spend adequate time making the respondent feel comfortable answering sensitive questions, and can note any special circumstances about the interview that might have affected the answers given.

The sample was drawn by an area probability method that relies on US Census figures and maps of the country. All respondents were interviewed in the fall, both before and after the election.

Some 1, people were interviewed before the election and 1, of these were successfully interviewed after the election. The dataset for this instructional package includes only the 1, respondents who were interviewed both before and after the election. The data for this instructional module are weighted. Weighting a dataset is a technical procedure to correct the data for several basic factors. The goal of weighting is to produce a dataset that is more demographically representative of the population.

When the data are weighted, some respondents count for more than one person (they have a sample weight greater than 1) and others count for less than one. Some small anomalies that you may occasionally notice will be the result of working with weighted data.
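As a sketch of what weighting does, a weighted percentage counts each respondent in proportion to their sample weight rather than counting everyone equally. All numbers below are invented for illustration:

```python
# Illustrative sketch of weighting: each respondent's answer is counted
# in proportion to their sample weight, so a weight above 1 counts for
# more than one person. All data here are invented.

def weighted_percent(responses, weights, value):
    """Percentage of the weighted sample giving `value` as a response."""
    total = sum(weights)
    matching = sum(w for r, w in zip(responses, weights) if r == value)
    return 100.0 * matching / total

# Five hypothetical respondents: did they vote? (1 = yes, 0 = no)
responses = [1, 1, 0, 1, 0]
weights = [1.4, 0.8, 1.0, 0.6, 1.2]  # weights sum to 5.0

unweighted = 100.0 * sum(responses) / len(responses)  # 60.0
weighted = weighted_percent(responses, weights, 1)    # (1.4+0.8+0.6)/5.0 -> 56.0

print(f"unweighted: {unweighted:.1f}%, weighted: {weighted:.1f}%")
```

Because the three "yes" respondents happen to carry below-average weights, the weighted estimate (56.0%) differs from the unweighted one (60.0%); this is exactly the kind of small discrepancy weighted data can produce.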

This point is discussed more in later sections.

Survey Research Methods

The study of voting behavior generally relies on information from sample surveys.

Codebook

In order to use a dataset, a codebook is needed.

Survey Sampling

Many people ask how it is possible to make any generalizations about the American public on the basis of a survey sample of about 1, individuals. We can reduce error in a survey through good sampling. Simple random samples are impractical in national surveys for two main reasons: there is no national list of American adults, and the sample would be scattered all over the US, making face-to-face interviews very expensive to conduct.
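One common workaround, which area probability designs like the NES's resemble, is multistage sampling: draw a sample of geographic areas first, then sample households only within the selected areas, so interviewers travel to a handful of places rather than the whole country. A toy sketch, with invented areas and households:

```python
import random

# Toy sketch of two-stage (multistage) sampling: draw a few geographic
# areas first, then draw households only within those areas. This keeps
# face-to-face interviewing feasible without a national list of adults.
# The areas and households here are invented.

random.seed(0)

# Stage 1: the population is grouped into areas (e.g., counties),
# each containing a list of households.
areas = {f"area_{i}": [f"area_{i}_hh_{j}" for j in range(100)]
         for i in range(50)}

# Stage 2: sample 5 areas, then 10 households within each sampled area.
sampled_areas = random.sample(sorted(areas), k=5)
sample = [hh for a in sampled_areas for hh in random.sample(areas[a], k=10)]

print(len(sample))  # 50 households, clustered in only 5 areas
```

The trade-off is clustering: the 50 interviews fall in only 5 areas, which raises sampling error somewhat compared with a simple random sample of the same size, but makes fieldwork affordable.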

The cellphone sample is drawn through systematic sampling from dedicated wireless banks of contiguous numbers and from shared-service banks with no directory-listed landline numbers, ensuring that the cellphone sample does not include banks that are also in the landline sample. The sample is designed to be representative both geographically and by large and small wireless carriers (also see cellphones for more information).

Both the landline and cell samples are released for interviewing in replicates, which are small random samples of each larger sample. Using replicates to control the release of telephone numbers ensures that the complete call procedures are followed for all numbers dialed. The use of replicates also improves the overall representativeness of the survey by helping to ensure that the regional distribution of numbers called is appropriate.
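The replicate procedure described above can be sketched as follows; because each replicate is drawn at random from the full sample, releasing them one at a time preserves representativeness while keeping the active call list small. The phone numbers and replicate size below are invented placeholders:

```python
import random

# Sketch of releasing a telephone sample in replicates: shuffle the full
# list of numbers, split it into small random subsamples, and release one
# replicate at a time so complete call procedures are followed for every
# number dialed. The numbers here are invented placeholders.

def make_replicates(numbers, size):
    """Split a shuffled copy of `numbers` into replicates of `size`."""
    shuffled = numbers[:]
    random.shuffle(shuffled)
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]

full_sample = [f"555-01{i:02d}" for i in range(60)]
replicates = make_replicates(full_sample, size=15)

# Each replicate is itself a small random sample of the full sample.
print(len(replicates))   # 4 replicates
print(len(replicates[0]))  # 15 numbers each
```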

The method used to select respondents within each landline household improves participation among young people, who are often more difficult to interview than older people because of their lifestyles. Unlike a landline phone, a cellphone is assumed in Pew Research Center polls to be a personal device. Interviewers ask whether the person who answers the cellphone is 18 years of age or older to determine whether the person is eligible to complete the survey (also see cellphone surveys for more information).

This means that, for those in the cell sample, no effort is made to give other household members a chance to be interviewed. Although some people share cellphones, it is still uncertain whether the benefits of sampling among the users of a shared cellphone outweigh the disadvantages. Sampling error results from collecting data from some, rather than all, members of the population.

For each of our surveys, we report a margin of sampling error for the total sample and usually for the key subgroups analyzed in the report. For example, the sampling error for a typical Pew Research Center national survey of 1, completed interviews is plus or minus 2. This means that in 95 out of every 100 samples of the same size and type, the results we obtain would vary by no more than that margin. Thus, the chances are very high (95 out of 100) that any sample we draw will be within 3 points of the true population value.
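The reported margin corresponds, in spirit, to the textbook 95% margin of error for a proportion from a simple random sample (the figures actually reported also fold in the effect of weighting, which this sketch omits). A sketch, with a sample size chosen purely for illustration:

```python
import math

# Sketch of a 95% margin of sampling error for a proportion, assuming a
# simple random sample. Real survey estimates also apply a design effect
# for weighting; this is the textbook formula only.

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval, in percentage points."""
    return 100.0 * z * math.sqrt(p * (1.0 - p) / n)

# Worst case (p = 0.5) for an illustrative 1,500 completed interviews:
print(round(margin_of_error(1500), 1))  # 2.5 percentage points
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the number of interviews halves the error, which is why samples of one to two thousand are usually considered sufficient for national estimates.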

The sampling errors we report also take into account the effect of weighting. Also see probability and non-probability sampling for more information.

At least seven attempts are made to complete an interview at every sampled telephone number. The calls are staggered over times of day and days of the week (including at least one daytime call) to maximize the chances of making contact with a potential respondent. Interviewing is also spread as evenly as possible across the field period.

The response rate is the percentage of known or assumed residential households for which a completed interview was obtained. Fortunately, low response rates are not necessarily an indication of nonresponse bias, as we discuss in the problem of declining response rates. In addition to the response rate, we sometimes report the contact rate, cooperation rate or the completion rate for a survey. The contact rate is the proportion of working numbers where a request for an interview was made.

The cooperation rate is the proportion of contacted numbers where someone gave initial consent to be interviewed. The completion rate is the proportion of initially cooperating and eligible households where someone completed the interview. Nonresponse in telephone interview surveys can produce biases in survey-derived estimates. Survey participation tends to vary for different subgroups of the population, and these subgroups are likely to also vary on questions of substantive interest.
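Using the definitions above, each rate is a ratio of one call disposition to another; the counts below are invented for illustration:

```python
# Sketch of the four rates defined above, using invented call dispositions.
# (Actual AAPOR formulas distinguish more disposition categories.)

households = 2000  # known or assumed residential households
working = 1800     # working numbers
contacted = 1200   # working numbers where an interview was requested
consented = 500    # contacted numbers giving initial consent
completed = 400    # cooperating, eligible households that finished

response_rate = completed / households    # 0.20
contact_rate = contacted / working        # ~0.67
cooperation_rate = consented / contacted  # ~0.42
completion_rate = completed / consented   # 0.80

print(f"response {response_rate:.0%}, completion {completion_rate:.0%}")
```

The contrast matters: in this invented example only 20% of households yield an interview, yet 80% of those who initially consent complete it, so the different rates diagnose different stages of nonresponse.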

To compensate for these known biases, the sample data are weighted for analysis.
