Last updated: 8 June 2013
See also Canadian consumer health information (CHI) portal
Survey tools are used to measure activities and opinions of users in libraries of all kinds. Surveys collect quantitative data from users about library services and their perceptions of those services. With the rise of evidence-based practice (and formal survey instruments such as LibQUAL+®), consulting with users and aggregating data are increasingly central to long-term strategic planning.
Surveying individuals and groups is a common assessment activity in libraries. Surveys elicit opinions from a population of users and help with gathering factual information. When questions are administered by librarians face-to-face (F2F) or on the phone, surveys are called structured interviews. The web and social media have changed how surveys are conducted: online tools have made the capture and analysis of data more efficient, and most aspects of the survey process benefit from easy access to data and users over the Internet.
Survey tools support:
- easy gathering of information about library services and programs
- generating samples of students and faculty
- creating online questionnaires to be completed anywhere
- collecting survey data quickly and efficiently
- analyzing survey responses using various statistical programs
- generating reports for administration and management of the library
- One of the significant barriers to academic librarians running surveys with web 2.0 services is that the data are stored in the United States; most if not all Canadian research ethics boards will not approve research whose data are not stored in Canada.
Survey evaluation tool
UBC License to Vovici
- UBC Information Technology. Vovici. Initial Setup
- Why Vovici? All data generated by Vovici are stored in Canada. The interface is easy to use and offers advanced survey design features, a wide range of reporting elements, the ability to solicit responses from specific groups, excellent support, and export of data to spreadsheets.
Structure and standardization
Questions in most surveys are structured and/or standardized to reduce bias. There are several criteria for the order in which questions appear:
- from most to least important for the respondents
- difficult or potentially objectionable questions are put at the end
- ordered logically such as chronologically
- questions with identical response modes are put together
The first two criteria help to reduce the chances that respondents will decide to stop answering questions on the survey (they might do so when the first questions are non-salient to them or objectionable). Questions should be ordered in such a way that a question does not influence the response to subsequent questions. Surveys are standardized to ensure reliability, generalizability, and validity. Every respondent should be presented with the same questions and in the same order as other respondents.
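As a rough sketch, the ordering criteria above can be applied programmatically before a questionnaire is generated. The question fields used here (salience, sensitivity, response mode) are invented for this illustration, not part of any particular survey tool:

```python
# Sketch: order survey questions by the criteria above --
# most salient first, sensitive/objectionable items last,
# and questions sharing a response mode kept together.
# The field names and example questions are invented.

def order_questions(questions):
    return sorted(
        questions,
        key=lambda q: (q["sensitive"], -q["salience"], q["response_mode"]),
    )

questions = [
    {"text": "Household income?", "salience": 1, "sensitive": True, "response_mode": "open"},
    {"text": "How often do you visit the library?", "salience": 5, "sensitive": False, "response_mode": "likert"},
    {"text": "Rate the reference desk.", "salience": 4, "sensitive": False, "response_mode": "likert"},
]

ordered = order_questions(questions)
print([q["text"] for q in ordered])
# The income question sorts last because it is marked sensitive.
```

The composite sort key keeps same-mode questions adjacent while pushing objectionable items to the end, which mirrors the criteria list above.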
Learning more about surveys
Free online survey resources
Advantages and disadvantages of surveys
- efficient way to collect information from respondents; large samples are possible
- statistical techniques can be used to determine validity, reliability, and statistical significance
- flexible, in the sense that a wide range of information can be collected; used to study attitudes, values, beliefs, and past behaviours
- because they are standardized, relatively free from several types of error; relatively easy to administer
- economical data collection due to the focus provided by standardized questions: only questions of interest are asked, recorded, coded, and analyzed, so time and money are not spent on tangential questions
- cheaper to administer
- depend on subjects' motivation, honesty, memory, and ability to respond: subjects may not be aware of their reasons for a given action, may have forgotten those reasons, or may not be motivated to give accurate answers; indeed, they may be motivated to give answers that present them in a favourable light
- structured surveys, particularly those with closed-ended questions, may have low validity when researching affective variables
- although chosen survey individuals are often a random sample, errors due to nonresponse may exist; people who choose to respond to the survey may differ from those who do not, biasing the estimates
- answer choices can yield vague data because they are relative to a personal, abstract notion of "strength of choice"; for instance, "moderately agree" may mean different things to different subjects, and to anyone interpreting the data. Even yes/no answers are problematic: subjects may put "no" if the choice "only once" is not available
Survey questionnaires usually consist of a set of questions, each with a response format (i.e., the way in which the respondent has to answer). A distinction is made between open-ended and closed-ended questions. Open-ended questions ask respondents to formulate their own answers; a respondent's answer to an open-ended question is coded into a response scale afterwards. Closed-ended questions ask the respondent to choose from a given number of answer options; the response options should be exhaustive and mutually exclusive. Generally, four to seven response categories are recommended; for telephone interviews, a maximum of five is advised.
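The guidance on closed-ended questions can be expressed as simple programmatic checks. This is a minimal sketch with invented labels; the uniqueness test is only a crude stand-in for true mutual exclusivity, and a catch-all option is one common way to make a list exhaustive:

```python
# Sketch: basic sanity checks on a closed-ended question's response
# options, following the guidance above. All labels are invented.

def check_options(options, catch_all="Other"):
    problems = []
    if len(set(options)) != len(options):
        problems.append("options are not mutually exclusive (duplicates)")
    if not 4 <= len(options) <= 7:
        problems.append("recommended range is 4-7 response categories")
    if catch_all not in options:
        problems.append("no catch-all option; list may not be exhaustive")
    return problems

likert = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
print(check_options(likert))  # flags only the missing catch-all option
```

A standard five-point Likert scale passes the count and uniqueness checks; whether it needs a catch-all depends on the question being asked.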
Modes of survey administration
There are several ways of administering a survey, including:
Online surveys
- can use the world wide web or e-mail
- web is preferred over e-mail because interactive forms can be used
- often inexpensive to administer
- very fast results
- easy to modify
- if not password-protected, easy to manipulate by completing multiple times to skew results
- data creation, manipulation and reporting can be automated and/or easily exported into a format which can be read by statistical analysis software
- data sets created in real time
- some are incentive based (such as Survey Vault or YouGov)
- may skew sample towards younger demographic
- difficult to determine/control selection probabilities, hindering quantitative analysis of data
- used in large scale industries
Personal in-home surveys
- respondents are interviewed in person, in their homes (or at the front door)
- very high cost
- suitable when graphic representations, smells, or demonstrations are involved
- often suitable for long surveys (but some respondents object to allowing strangers into their home for extended periods)
- suitable for locations where telephone or mail are not developed
- skilled interviewers can persuade respondents to cooperate, improving response rates
- potential for interviewer bias
- Computer Assisted Personal Interviewing (CAPI): the face-to-face interviewer is assisted by a computer to ask the questions and to record the respondent's answers
Telephone surveys
- use of interviewers encourages sample persons to respond, leading to higher response rates
- interviewers can increase comprehension of questions by answering respondents' questions.
- fairly cost efficient, depending on local call charge structure
- good for large national (or international) sampling frames
- some potential for interviewer bias (e.g. users may be more willing to discuss a sensitive issue with a female interviewer than with a male)
- cannot be used for non-audio information (graphics, web tools, demonstration samples)
- unreliable for some surveys
Mail surveys
- may be handed to respondents or mailed, but in all cases returned to researchers via mail
- cost is very low, since bulk postage is cheap in most countries
- long time delays, often several months, before surveys are returned and statistical analysis can begin
- not suitable for issues that require clarification
- respondents can answer at their own convenience (allowing them to break up long surveys; also useful if they need to check records to answer a question)
- no interviewer bias introduced
- large amount of information can be obtained: some mail surveys are as long as 50 pages
- response rates can be improved by using mail panels
- members of the panel have agreed to participate
- panels can be used in longitudinal designs where the same respondents are surveyed several times
In-library intercept surveys
- patrons are intercepted and either interviewed on the spot, taken to a room for an interview, or given a self-administered questionnaire
- socially acceptable - people feel a library is an appropriate place to do research
- potential for interviewer bias
- fast
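The automated export to statistical software noted above for online surveys can be sketched with nothing more than the standard library. The column names and responses below are invented for illustration:

```python
import csv
import io

# Sketch: export collected survey responses to CSV so a statistics
# package (SPSS, R, etc.) can read them. Fields and data are invented.

responses = [
    {"respondent": 1, "q1_visits_per_week": 3, "q2_satisfaction": "Agree"},
    {"respondent": 2, "q1_visits_per_week": 0, "q2_satisfaction": "Neutral"},
]

buf = io.StringIO()  # in practice, open a file instead
writer = csv.DictWriter(
    buf, fieldnames=["respondent", "q1_visits_per_week", "q2_satisfaction"]
)
writer.writeheader()
writer.writerows(responses)
print(buf.getvalue())
```

Because the export is plain CSV, the same data set can move between spreadsheet and statistical tools without manual re-entry.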
The choice between survey administration modes is influenced by several factors: 1) cost, 2) coverage of target population and sampling, 3) flexibility of asking questions, 4) respondents' willingness to participate, 5) response accuracy.
Methods used to increase response rates / reduce nonresponse
Two types of nonresponse are distinguished:
- unit nonresponse (sampled unit did not respond to any of the questions)
- item nonresponse (sampled unit did not respond to some questions).
Item nonresponse can be accidental or intentional. One can deal with nonresponse through preventative methods (preventing the occurrence of nonresponse) or through curative methods (accounting for nonresponse during statistical analysis).
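The unit/item distinction above is easy to operationalize when tallying returned questionnaires. In this sketch, which uses invented data, `None` marks a skipped item and an all-`None` record stands in for a sampled unit that answered nothing:

```python
# Sketch: distinguish unit nonresponse (no questions answered) from
# item nonresponse (some questions skipped). Data are invented.

sample = [
    {"q1": 4, "q2": "yes"},       # complete response
    {"q1": None, "q2": "no"},     # item nonresponse on q1
    {"q1": None, "q2": None},     # unit nonresponse
]

unit_nonresponse = sum(
    1 for r in sample if all(v is None for v in r.values())
)
item_nonresponse = sum(
    1 for r in sample
    if any(v is None for v in r.values())
    and not all(v is None for v in r.values())
)
print(unit_nonresponse, item_nonresponse)  # 1 unit, 1 item
```

Separating the two counts matters because they call for different remedies: unit nonresponse is usually addressed at the sampling and follow-up stage, item nonresponse during analysis.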
Strategies to reduce nonresponse
- brevity - single page if possible
- financial incentives
- paid in advance
- paid at completion
- non-monetary incentives
- commodity giveaways (pens, notepads)
- entry into a lottery, draw or contest
- discount coupons
- promise of contribution to charity
- preliminary notification
- foot-in-the-door techniques - start with a small inconsequential request
- personalization of the request - address specific individuals
- follow-up requests - multiple requests
- claimed affiliation with universities, research institutions, or charities
- emotional appeals
- bids for sympathy
- convince respondent that they can make a difference
- guarantee anonymity
- legal compulsion (certain government-run surveys)
Sample selection is critical to the validity of the information gathered about the population being studied. The sampling approach helps to determine the focus of the study and supports the generalizations being made. Deliberately biased sampling can be used if it is justified, as long as it is noted that the resulting sample may not be a true representation of the population under study.
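The baseline against which biased sampling is judged is a simple random sample, where every member of the population has an equal chance of selection. A minimal sketch, using an invented patron list:

```python
import random

# Sketch: draw a simple random sample from a patron list so every
# member has an equal selection probability. Names are invented.

population = [f"patron_{i}" for i in range(500)]

random.seed(42)  # fixed seed so the draw is repeatable
sample = random.sample(population, k=50)  # sampling without replacement

print(len(sample), len(set(sample)))  # 50 distinct patrons
```

`random.sample` draws without replacement, so no patron is surveyed twice; any departure from this equal-probability draw is the kind of bias that should be justified and noted.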
Examples of online surveys
- Booth A. A quest for questionnaires. Health Info Libr J. 2003;20(1):53-6.
- Booth A. Mind your Ps and Qs (pitfalls of questionnaires). Health Info Libr J. 2005;22(3):228-31.
- Bourque LB, Fielder EP. How to conduct self-administered and mail surveys. Thousand Oaks, CA: Sage; 1995.
- Couper MP. Web surveys: a review of issues and approaches. Public Opinion Q. 2000;64(4):464–94.
- Couper MP. Whither the web: web 2.0 and the changing world of web surveys. In: Challenges of a Changing World: ICASC. Berkeley, UK: ASC, 2008.
- DiGregorio S, Davidson J. Qualitative research design for software users. Open University Press; 2008.
- Dillman DA. Mail and Internet surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons; 2007.
- Edmunds H. The focus group research handbook. Lincolnwood, IL: NTC Business Books; 1999.
- Edwards PJ, Roberts I, Cooper R. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;(3).
- Evans R, Burnett D, Kendrick O, Macrina D. Developing valid and reliable online survey instruments using commercial software programs. J Cons Health Internet. 2009;13(1):42-52.
- Eysenbach G. Improving the quality of Web surveys: checklist for reporting results of Internet e-surveys. JMIR. 2004;29;6(3):e34.
- Given LM. SAGE Encyclopedia of Qualitative Research Methods. SAGE Publications, 2008.
- Glitz B. Focus groups for libraries and librarians. New York: Forbes; 1998.
- Greenbaum TL. The handbook for focus group research. Thousand Oaks, CA: Sage; 1998.
- Groves RM. Survey errors and survey costs. New York: Wiley.
- Krueger RA, Casey MA. Focus groups: A practical guide for applied research. Thousand Oaks, CA: Sage; 2000.
- Mangione TW. Mail surveys: Improving the quality. Thousand Oaks, CA: Sage; 2000.
- McCracken G. The long interview. Newbury Park, CA: Sage; 1988.
- Morgan DL. Successful focus groups: advancing the state of the art. Newbury Park, CA: Sage; 1993.
- Morgan DL. Focus groups as qualitative research. Thousand Oaks, CA: Sage; 1997.
- Morgan DL, Krueger RA, King JA. Focus group kit. Thousand Oaks, CA: Sage; 1998.
- Salant P, Dillman DA. How to conduct your own survey. New York: Wiley; 1994.
- Saris WE, Gallhofer IN. Design, evaluation, and analysis of questionnaires for survey research. Hoboken, NJ: Wiley-Interscience.
- Schonlau M, Fricker RD, Elliott MN. Conducting research surveys via e-mail and the web. Santa Monica, CA: Rand; 2002.
- Schuman H. Method and meaning in polls and surveys. Cambridge: Harvard University Press; 2008.
- Shah A. DADOS-Survey: an open-source application for CHERRIES-compliant web surveys. BMC Med Inform Decis Mak. 2006;6:34.
- Talmon J. Statement on reporting of evaluation studies in health informatics. Int J Med Inform. 2009;78(1):1-9.
- Vehovar. Web surveys versus other survey modes: a meta-analysis comparing response rates. Int J Market Research. 2008;50(1):79–104.
- Wagner MM, Mahmoodi SH. A focus group interview manual. Chicago, IL: American Library; 1994.
- Ward D. Getting the most out of web-based surveys. Chicago: American Library Association; 2000.
- Yin RK. Case study research: design and methods. Thousand Oaks; 2003.