- What is a survey?
- When to use a survey
- Data Collection Method
- Sources of Error
- Bias and accuracy
- The role of the Office of the Government Statistician (OGS)
- Where can I get more help?
What is a survey?
Surveys provide a means of measuring a population’s characteristics, self-reported and observed behaviour, awareness of programs, attitudes or opinions, and needs. Repeating surveys at regular intervals can assist in the measurement of changes over time. These types of information are invaluable in planning and evaluating Government policies and programs.
Unlike a census, where all members of a population are studied, sample surveys gather information from only a portion of a population of interest. The size of the sample depends on the purpose of the study.
In a statistically valid survey, the sample is objectively chosen so that each member of the population will have a known non-zero chance of selection. Only then can the results be reliably projected from the sample to the population. The sample should not be selected haphazardly or only from those who volunteer to participate.
When to use a survey
When determining the need for a survey, departments/agencies should first check that the required information is not already available (for example, conduct library searches, refer to the Office of Economic and Statistical Research).
The option of collecting the required information using existing administrative records should also be explored. Using existing data or records provides considerable advantages in terms of cost, time and the absence of respondent burden. The major disadvantage is the lack of control over the data collected. If existing data is not available or suitable, a number of factors must then be considered when determining which type of survey, if any, is appropriate. For example:
- Can the information be collected cost effectively and accurately via a survey?
- How complex and how sensitive is the topic?
- Do respondents have access to the required information?
- Will they be willing to supply the information?
- Will their responses to the questions be valid?
- Are the necessary financial, staff, computer or other resources available?
- When is the information required?
- Is enough time available to ensure that data of sufficient quality can be collected and analysed?
- When is the best time to conduct the survey? (For example, need to allow for seasonality, impact of school holiday periods etc).
- Do you want to use this information to target program improvements? If so, you may need to identify the key sub-groups you wish to report on (for example, geographic areas, age groups, sex, industry and size of business) and obtain sufficient responses for each group to ensure results are accurate enough for your needs.
- What level of error can be tolerated? This depends on how and for what purposes you intend to use the survey results.
- Is the survey to be repeated? How often?
- Does the department/agency have authority to collect the information through either a compulsory or voluntary survey? For example, the Statistical Returns Act 1896 empowers the (Queensland) Government Statistician to collect statistical information on a wide range of matters on a compulsory basis.
Ethical considerations must be observed throughout the survey exercise. This includes treating data confidentially where appropriate and, where information is sought on the understanding that respondents cannot be identified, preserving that anonymity. Other ethical considerations include:
- Do you need identifiable information (for example, names, addresses, telephone numbers) relating to respondents for follow-up research or matching with other data? If so, you need to clearly explain why you need such details and obtain their consent.
- Will respondents be adversely affected or harmed as a direct result of participating in the survey?
- Are procedures in place for respondents to check the identity and bona fides of the researchers?
- Is the survey being conducted on a voluntary basis? If so, respondents must not be misled to believe it is compulsory when being asked for their co-operation.
- Is it necessary to interview children under 14 years? If so, the consent of their parents / guardians / responsible adults must be obtained.
These factors must all be taken into consideration when developing an appropriate sample design (that is, sample size, selection method, etc.) and survey method. Depending on the expertise within the department/agency, this may be done with advice or assistance from the Office of the Government Statistician or a consultant.
The following is an outline of the general process to be followed once the need for a survey has been determined. Some steps will not be necessary in all cases and some processes can be carried out at the same time (for example, data collection and preparation for data entry and processing).
A sample survey is cheaper and more timely than a census but still requires significant resources, effort and time. The survey process is complex and the stages are not necessarily sequential. Pilot testing of, at least, key elements such as the questionnaire and survey operations is an essential part of the development stage. It may be necessary to go through more than one cycle of development, testing, evaluation and modification before a satisfactory solution is reached.
The entire process should be planned ahead, including all critical dates. It is always beneficial to approach the Office of the Government Statistician (OGS) or prospective consultants as early as possible during this planning stage.
The time required from initial planning to the completion of a report or publication may vary from several weeks to several months according to the size and type of survey.
Key steps in the survey process include:
Planning and designing
1. Define the purpose, objectives and the output required. Experience has shown that well-defined output requirements at the outset minimise the risk of the survey producing invalid results.
2. Design collection methodology and sample selection method.
3. Develop survey procedures. Design and print test questionnaires and any other documentation (for example, instructions for interviewers and introductory letters).
Testing and modifying
4. Pilot test all aspects of the survey if possible. As a minimum, a small-scale pre-test of questionnaires can reveal problems with question wording, layout, understanding or respondent reaction.
5. Analyse test results (completed questionnaires, response/consent rate etc). Obtain feedback from respondents and/or interviewers.
6. Modify procedures, questionnaires and documentation according to test evaluation.
7. Repeat steps 1–6 if necessary.
Conducting the survey
8. Finalise procedures, questionnaires and documentation.
9. Select sample.
10. Train interviewers (if interviewer-based).
11. Conduct the survey (that is, mail out questionnaires or commence interviewing) including follow-up of refusals and non-contacts, supervision and checks of interviewers’ work.
Processing and analysing
12. Prepare data entry, estimation and tabulation systems.
13. Code, enter and edit data.
14. Process data—calculate population estimates and standard errors, prepare tables.
15. Prepare report of survey results.
16. Prepare technical report. Evaluate and document all aspects of the survey for use when designing future surveys.
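As an illustration of step 14, the following is a minimal sketch of estimating a population proportion and its standard error from a simple random sample. The figures and function name are hypothetical; real surveys with more complex designs would need weighting and specialist estimation software.

```python
import math

def estimate_proportion(responses, population_size):
    """Estimate a population proportion and its standard error from a
    simple random sample of yes/no (1/0) responses.

    Applies the finite population correction, since the sample is
    drawn without replacement from a finite population."""
    n = len(responses)
    p = sum(responses) / n                    # sample proportion
    fpc = 1 - n / population_size             # finite population correction
    se = math.sqrt(p * (1 - p) / n * fpc)     # standard error of p
    return p, se

# Hypothetical example: 120 "yes" answers out of 400 responses,
# drawn from a population of 10,000.
p, se = estimate_proportion([1] * 120 + [0] * 280, 10_000)
print(f"estimate = {p:.3f}, standard error = {se:.4f}")
```

The standard error from this calculation feeds directly into the accuracy checks described under "Bias and accuracy" below.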
Data Collection Method
Commonly used methods for collecting quantitative data include telephone and face-to-face interviews, self-completion questionnaires (such as mail, email, web-based or SMS) or combinations of these.
Each has advantages and disadvantages in terms of the cost, time, response/consent rate and the type of information that can be collected.
Self-completion surveys via mail, email, the internet or SMS are generally the least expensive, particularly for a widespread sample. They allow respondents time to consider their answers, refer to records or consult with others (which can be helpful or unhelpful, depending on the survey’s objectives). They also eliminate interviewer errors and reduce the incidence of selected people (or units) being unable to be contacted.
A major disadvantage of self-completion surveys is the potentially high non-response. In such cases, substantial bias can result if people who do not complete the survey have different characteristics from those who do. However, response can be improved using techniques such as well-written introductory letters, incentives for timely return of questionnaires and follow-up for those initially not responding.
In self-completion surveys there is no opportunity to clarify answers or supplement the survey with observational data. In mail surveys the questionnaire usually has to be simple and reasonably short, particularly when surveying the general community. Internet and email-based surveys are commonly used for surveying clients or staff within organisations and allow more complex questionnaires to be used than mail surveys do.
Interviewer-based surveys such as face-to-face or telephone surveys generally allow more data to be gathered than self-completion surveys and can include the use of more complex questionnaires.
Interviewers can reduce non-response by answering respondents’ queries or concerns. They can often pick up and resolve respondent errors.
Face-to-face surveys are usually more expensive than other methodologies. Poor interviewers can introduce additional errors and, in some cases, the face-to-face approach is unsuitable for sensitive topics.
Telephone surveys are generally cheaper and quicker than face-to-face surveys, and are well suited to situations where timely results are needed. However, non-response may be higher than for face-to-face surveys as it is harder for interviewers to prove their identity, assure confidentiality and establish rapport.
Telephone surveys are not suited for situations where the respondents need to refer to records extensively. Also, the questionnaires must be simpler and shorter than for face-to-face surveys and prompt cards cannot be used.
Computer Assisted Telephone Interviewing (CATI) is a particular type of telephone survey technique that helps to resolve some of the limitations of general telephone-based surveying. With CATI, interviewers use a computer terminal. The questions appear on the computer screen and the interviewers enter responses directly into the computer. The interviewer’s screen is programmed to show questions in the planned order, so interviewers cannot inadvertently omit questions or ask them out of sequence. Online messages warn interviewers if they enter invalid or unusual values.
Most CATI systems also allow many aspects of survey operations to be automated, e.g. rescheduling of call-backs, engaged numbers and “no answers”, and allow automatic dialling and remote supervision of interviewer/respondent interaction.
A survey frame or list which contains telephone numbers is required to conduct a telephone survey. For general population surveys, such lists are not readily available or they have limitations that can lead to biased results.
If the Electronic White Pages list is used to select a sample of households then the sample will not include households with silent numbers. In addition, it may exclude households with recent new connections or recent changes to existing numbers.
Electoral rolls exclude respondents aged less than 18 years of age, migrants not yet naturalised and others ineligible to vote.
Random Digit Dialling may address some of the under-coverage associated with an Electronic White Pages or electoral roll list, but it is inefficient for sampling at a low geographic level and does not allow for communicating with households (via a pre-approach letter, for example) prior to the commencement of telephone interviewing.
Research conducted by the Office of the Government Statistician indicated that 10% of Queensland households with a telephone did not have a landline connected and relied solely on a mobile phone for making and receiving calls.
When the Office of the Government Statistician conducts telephone surveys that need to reflect a representative cross-section of the Queensland public, households are randomly selected based on information from databases which are either publicly available or kept by the Office for official statistical purposes under the authority of the Statistical Returns Act. Such databases contain contact telephone information which includes mobile phone numbers.
Combinations of collection methods – such as interviewers dropping off a questionnaire to be mailed back or returning to pick it up, a mail survey with telephone follow-up, or an initial telephone call to obtain cooperation or name of a suitable respondent followed by a mail survey – are sometimes used to obtain higher response/consent rates to a survey.
If in-depth or purely qualitative information is required, alternative research methods should be considered. Focus groups, observation and in-depth interviewing are all useful when developing a survey or initially exploring areas of interest. They can also be a valuable supplement to survey data. However, results from such studies should not be considered representative of the entire population of interest.
Sources of Error
Whether a survey is being conducted by departmental/agency staff or by consultants, it is important to be aware of potential sources of error and strategies to minimise them.
Errors arising in the collection of survey data can be divided into two types—sampling error and non-sampling error.
Sampling Error occurs when data is collected from a sample rather than the entire population. The sampling error associated with survey results for a particular sub-group of interest depends mainly on the number of achieved responses for that sub-group rather than on the percentage of units sampled. Estimates of sampling error, such as standard errors, can be calculated mathematically. They are affected by factors such as:
Sample size—increasing the sample size will decrease the sampling error;
Population variability—a larger sampling error will be present if the items of interest vary greatly within the population;
Sample design—standard errors cannot be calculated if the probability of selection is not known (for example, quota sampling).
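The effect of sample size can be illustrated with the textbook standard-error formula for a proportion. This is a simplified sketch assuming simple random sampling and ignoring the finite population correction; it shows that quadrupling the sample size halves the sampling error.

```python
import math

# Standard error of an estimated proportion for several sample sizes,
# using p = 0.5, the most variable (worst) case. Larger samples give
# smaller sampling error; quadrupling n halves the standard error.
p = 0.5
errors = {n: math.sqrt(p * (1 - p) / n) for n in (100, 400, 1600)}
for n, se in errors.items():
    print(f"n = {n:>4}: standard error = {se:.4f}")
```

This is also why the sampling error for a sub-group depends on the number of responses achieved for that sub-group, not on the percentage of the population sampled.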
All other errors associated with collecting survey data are called non-sampling errors. Although they cannot be measured in the same way as sampling errors, they are just as important.
The following table lists common sources of non-sampling error and some strategies to minimise them.
|Source of Error|Examples|Strategies to minimise error|
|---|---|---|
|Planning and interpretation|Inadequate definitions of concepts, terms or populations.|Ensure all concepts, terms and populations are defined precisely through consultation between data users and survey designers.|
|Sample selection|Inadequate list from which sample is selected; biased sample selection.|Check list for accuracy, duplicates and missing units; use appropriate selection procedures (see “Bias and accuracy” below).|
|Survey methods|Inappropriate method (e.g., mail survey for a very complicated topic).|Choose an appropriate method and test thoroughly.|
|Questionnaire|Loaded, misleading or ambiguous questions, poor layout or sequencing.|Use plain English, clear questions and logical layout; test thoroughly.|
|Interviewers|Leading respondents, making assumptions, misunderstanding or misrecording answers.|Provide clear interviewer instructions and appropriate training, including exercises and field supervision.|
|Respondents|Refusals, memory problems, rounding answers, protecting personal interests or integrity.|Promote survey through public media; ensure confidentiality; if interviewer-based, use well-trained, impartial interviewers and probing techniques; if mail-based, use a well-written introductory letter.|
|Processing|Errors in data entry, coding or editing.|Adequately train and supervise processing staff; check a sample of each person’s work.|
|Estimation|Incorrect weighting, errors in calculation of estimates.|Ensure that skilled statisticians undertake estimation.|
If a consultant conducts the survey, departmental/agency staff should have input into the questionnaire design, participate in testing and attend interviewer training and debriefing.
Details of techniques used to minimise non-sampling error should be requested.
Bias and accuracy
Non-response occurs in virtually all surveys through factors such as refusals, non-contact and language difficulties.
It is of particular importance if the characteristics of non-respondents differ from respondents. For example, if high-income earners are more likely to refuse to participate in an income survey, the results will obviously be biased towards lower incomes.
For this reason, all surveys should aim for the maximum possible response/consent rate, within cost and time constraints, by using techniques such as call-backs to non-contacts and follow-up of refusals. The level of non-response should always be measured.
Bias can also arise from inadequate sampling frames, the lists from which respondents are selected. Household and business telephone listings and electoral rolls are often used as sampling frames, but they all have limitations. Telephone listings exclude respondents who do not have telephones and can exclude those with “silent” or unlisted numbers. Electoral rolls exclude respondents aged less than 18 years of age, migrants not yet naturalised and others ineligible to vote.
Once again, if these people are of interest and have different characteristics from those included on the frame, bias will be introduced.
One selection method often used by researchers is quota sampling. Interviewers are instructed to obtain a certain number of interviews, often with respondents in particular categories. For example, 30 interviews with females aged 18 to 25 years and 20 interviews with males aged 18 to 25 years etc. Interviewers may interview anyone fitting these criteria. Unfortunately, people who are most easily contacted or most approachable may have different opinions or behaviour to those not interviewed, introducing potential bias. As each person’s chance of selection is not known, standard errors cannot, strictly speaking, be calculated. Consequently, the accuracy of the survey results cannot be determined.
For this reason, the OGS strongly recommends that probability sampling, where each person or unit has a known non-zero chance of selection, be used in preference to quota sampling. In probability sampling, the sample is selected by objective methods and when properly carried out, there is no risk of bias arising from subjective judgements in the selection of the sample.
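A minimal sketch of probability sampling, assuming a complete sampling frame is available. The frame and household identifiers here are hypothetical; real frames need checking for duplicates and missing units as noted in the table above.

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Select a probability sample of n units from a sampling frame.

    Every unit on the frame has the same known, non-zero chance of
    selection (n / len(frame)), so standard errors can be calculated
    for the resulting estimates -- unlike quota sampling."""
    rng = random.Random(seed)       # seeded for a reproducible selection
    return rng.sample(frame, n)     # sampling without replacement

# Hypothetical frame of 10,000 household identifiers
frame = [f"HH{i:05d}" for i in range(10_000)]
sample = simple_random_sample(frame, 400, seed=42)
print(len(sample), "distinct households selected")
```

More elaborate probability designs (stratified or clustered samples, unequal selection probabilities) keep the same essential property: the chance of selection is known for every unit.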
If constraints are such that a probability sample is impractical, other research methods—such as focus groups or purposive sampling—should be considered. It must, however, be remembered that results from these procedures cannot be assumed to be representative of a broader population.
Given that a probability sample has been undertaken, standard errors should be calculated to check the accuracy of all results.
It is recommended that estimates with a relative standard error (that is, the standard error divided by the estimate multiplied by 100) which exceed 50% should not be used. Estimates with a relative standard error from 25% to 50% inclusive should be used with caution.
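The rule of thumb above can be sketched in Python. The function names are illustrative only, not part of any OGS standard.

```python
def relative_standard_error(estimate, standard_error):
    """Relative standard error as a percentage of the estimate."""
    return standard_error / estimate * 100

def usability(estimate, standard_error):
    """Apply the rule of thumb above: RSE over 50% -> do not use;
    25% to 50% inclusive -> use with caution; under 25% -> use."""
    rse = relative_standard_error(estimate, standard_error)
    if rse > 50:
        return "do not use"
    if rse >= 25:
        return "use with caution"
    return "use"

print(usability(1000, 100))   # RSE 10%
print(usability(1000, 300))   # RSE 30%
print(usability(1000, 600))   # RSE 60%
```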
The role of the Office of the Government Statistician (OGS)
The role of the Office of the Government Statistician is to coordinate statistical activities of the Queensland Government and to provide a policy framework for the collection and management of statistics. This includes compliance with Government directives on the management of statistics, the reduction of waste through unnecessary duplication and monitoring the impact of data collection on those who respond to surveys, particularly businesses.
The OGS is part of the Office of Economic and Statistical Research (OESR) which provides a commercial statistical service including assistance to agencies conducting surveys by:
Conducting all or part of the survey
In consultation with departmental/agency staff, OESR can design the survey and questionnaire, pilot test, train interviewers, enumerate, process and analyse the data;
Assisting in selection or evaluation of a consultant
Agencies may prefer to engage specialised consultants or the OESR may not be able to conduct the survey due to other priorities. In these cases the OESR can assist with all aspects of engaging consultants, including assistance with preparing briefs, selecting consultants and evaluating their work; or
Assisting agencies to conduct their own surveys
Budgetary or other constraints may require agencies to conduct their own surveys. The OESR can still provide advice on all aspects of conducting a sample survey.
The following information on preparing briefs may be of particular assistance in the engagement of consultants or contractors to conduct all or part of a survey. Please refer to the “Engaging and Managing Consultants” Guide for information on the general process and requirements for engaging consultants or contractors.
To obtain the highest quality proposals, it is important to provide contenders with the maximum amount of relevant information. Concentrate on clearly stating aims, objectives and requirements.
The following (additional) information should be included in a brief for the conduct of a survey:
Give relevant background details such as previous research and where the survey fits into the department’s/agency’s program.
Outline the specific purpose and objectives of the survey.
Indicate the population and/or sub-populations of interest. For example, all Queensland women aged 18 years or over.
Provide details of the survey content and preferred method (if appropriate). Include a list of data items and output specifications (for example, tables and analyses, including accuracy requirements). Attach a draft questionnaire if prepared. Specify reporting requirements including the calculation of standard errors and details of response/consent rates and techniques used to minimise non-sampling error. Clearly indicate the parts of the project which will be the responsibility of the consultant.
Include dates, such as:
- receipt of proposals;
- engagement of consultant;
- commencement and completion of pilot testing;
- commencement and completion of survey; and
- presentation of results or report.
Indicate proposal requirements, such as:
- details of proposed method;
- full breakdown of costs (list categories of interest);
- details of interviewers, if appropriate (for example, number, location, training, experience);
- details of previous relevant work;
- names and backgrounds of staff who would be responsible for the project; and
- details of any part of the project which would be subcontracted to another organisation.
Selection Criteria should also cover:
- quality, clarity and relevance of proposed survey design;
- expertise in technical and operational aspects of sample surveys; and
- demonstrated ability in undertaking similar work.
Where can I get more help?
Requests for advice or further information about conducting surveys should be directed to:
Telephone: (07) 3035 6418
Facsimile: (07) 3234 0755
See also OESR's Survey page on this site.
Last reviewed 27 February 2013