DOCUMENT RESUME

ED 446 513                                              HE 033 325

AUTHOR       Underwood, Daniel; Kim, Heather; Matier, Michael
TITLE        To Mail or To Web: Comparisons of Survey Response Rates and
             Respondent Characteristics. AIR 2000 Annual Forum Paper.
PUB DATE     2000-05-00
NOTE         24p.; Paper presented at the Annual Forum of the Association
             for Institutional Research (40th, Cincinnati, OH, May 21-24,
             2000).
PUB TYPE     Reports - Research (143) -- Speeches/Meeting Papers (150)
EDRS PRICE   MF01/PC01 Plus Postage.
DESCRIPTORS  College Seniors; Computer Uses in Education; *Electronic
             Mail; *Evaluation Methods; Higher Education; Minority
             Groups; Sex Differences; *Student Evaluation; *Student
             Surveys; *World Wide Web
IDENTIFIERS  *Paper and Pencil Tests
ABSTRACT
The purpose of this study was to explore issues arising from
the proliferation in use of electronic survey methods in higher education.
Specifically, it examined whether response rates differed between surveys
administered utilizing traditional, mailed, paper-and-pencil instruments and
surveys administered utilizing electronic mail (e-mail) and the World Wide
Web. A case study approach provided a two-tiered analysis that included
intra-institutional and inter-institutional findings. The mailed survey began
by sending informational letters to all graduating seniors in the spring
semester, then sending a copy of the survey 2 weeks later. Reminders were sent
through the mail. The Web survey used an e-mail message to notify the entire
freshman and sophomore class of an upcoming survey, then another e-mail to
provide the Web address of the survey. Reminders were sent via e-mail. Data
analysis indicated that regardless of the survey method used, women responded
at greater rates than did men and underrepresented minority students
responded at lower rates than did whites, Asian Americans, and international
students. The response rate was substantially higher for the mail survey than
for the Web survey. (Contains 29 references.) (SM)
To Mail or to Web:
Comparisons of survey response rates and respondent characteristics

Prepared for the 40th Annual Forum
of the Association for Institutional Research
May 21-24, 2000
Cincinnati, Ohio

Daniel Underwood (du22@cornell.edu)
Heather Kim (hhk4@cornell.edu)
Michael Matier (mwm@cornell.edu)

Institutional Research and Planning
Cornell University
440 Day Hall
Ithaca, NY 14853-2801
607-255-7540
607-255-2990 (fax)
To Mail or to Web:
Comparisons of survey response rates and respondent characteristics
Abstract
The purpose of this study was to explore issues arising from the proliferation in
use of electronic survey methods in higher education. Specifically, we examined
whether response rates differed between surveys administered utilizing traditional,
mailed paper and pencil instruments, and surveys administered utilizing electronic
mail and the World Wide Web (web). A case study approach was used to provide a
two-tiered analysis that included intra-institutional and inter-institutional findings.
Our findings suggest that women respond at greater rates than men regardless of
survey method, and that underrepresented minority students generally respond at
lower rates than Whites, Asian-Americans, and International students, regardless of the
survey method used. These findings suggest that respondent characteristics, rather than
survey method, are tightly coupled to response rates. Hence, we believe that too much
focus, in research and practice, on aggregate response rates as the sole basis for
increasing data reliability and validity might obscure data that are potentially
biased by low response rates among men and underrepresented minorities.
Introduction
The use of electronic survey research is gaining popularity inside and outside of
higher education. Driven largely by increased cost efficiency, timely availability of data,
and accuracy of data, electronic mail (e-mail) and the World Wide Web (WWW or web)
surveys have emerged as common alternatives to the time-honored self-reporting
method of mailed instruments (Couper, 1998; Hayes, 1998; Mertens, 1998; Smith, 1997).
These advantages have not been lost on colleges and universities, particularly in
institutional research where e-mail and web surveys are quickly becoming important
means for collecting information about students (Aisu et al., 1998; Kawasaki & Raven,
1995; Watson, 1998). However, amid the growth in the use of electronic survey methods
in higher education there is a notable absence of research and understanding about how
it affects the practice of institutional research. The purpose of this study is to begin to
develop such an understanding by examining whether these recent innovations in
research methods might differ from traditional mailed paper and pencil means for
collecting self-reported data.
To do this, we turned our focus toward a natural starting point for such inquiry:
survey response rates. Response rates have long been hallowed ground in social science
survey research, leading Fowler (1993) to conclude that "[t]he response rate is a basic
parameter for evaluating a data collection effort" (p. 39). Despite such heralded
importance, research about response rates has been surprisingly narrow in scope and
focuses predominately on methods and strategies for increasing overall response rates
to reduce the likelihood of nonresponse bias (Fowler, 1993; Heberlein & Baumgartner,
1978; Jobber, 1984; Yin, 1994). But for practitioners of institutional research, response
rates typically have value beyond an overall percentage and often help tell a larger
story about who is responding to our data collection efforts. Indeed, it is not unusual for
social science researchers in a variety of contexts to value information about
respondents' gender or racial and ethnic backgrounds (Pascarella & Terenzini, 1991;
Van Maanen et al., 1982; Wolcott, 1994).
Hence, we chose to look at how new survey technologies might affect higher
education research by comparing the response rates from Cornell University's recent
administration of two surveys: a survey of graduating seniors and a survey of currently
enrolled students (the Cycles Survey). Both surveys were administered in collaboration
with peer institutions that are members of the Consortium on Financing Higher
Education (COFHE). Specifically, we examine the response rates by overall percentages
and by the respondents' gender, racial, and ethnic characteristics. Cornell provided an
excellent case for such a comparison because the Senior Survey was a traditional,
mailed paper and pencil instrument, while the Cycles Survey was administered
electronically via the web. This produced two discrete experiences that we could
examine in an up close, side-by-side way (Yin, 1994). Moreover, these surveys provided
the particular advantage of being administered relatively contemporaneously (within a
year of each other) and targeting similar subject matters and respondents (Dillman et
al., 1974; Fowler, 1993).
Perhaps the greatest value in studying Cornell's surveys, however, is that it
provided for a two-tiered analysis that includes intra-institutional analysis and inter-institutional analysis. Comparing the two surveys intra-institutionally is useful because
it permits us to explore the largely under-researched issue of how a single institution's
experience might differ between electronic and mail survey administrations. And,
because we can study a survey administered by a set of similar institutions (the Cycles
Survey), we add the value of viewing our data against a backdrop of the experiences of
other schools. In this way, our study begins to address issues surrounding the increased
use of electronic surveys in higher education at both a micro- and macro-level, and in so
doing provides a sorely missed first step in considering how institutional research
might be affected.
The remainder of this paper is comprised of four sections: theoretical framework,
methods, findings, and conclusions and implications. Initially, the theoretical
framework sets out the research and the literature that formed the basis of our
exploration and interpretations. Next, we discuss the methods used in the
administration of Cornell's electronic and mail surveys that are the subject of our study.
Then, our findings are presented in a two-tiered approach. First, we present our
findings regarding the intra-institutional comparison of the two surveys. Second, we
present inter-institutional findings by comparing Cornell's web survey (the Cycles
Survey) to the experiences of other consortium schools that administered the survey
using both electronic and mail methods. Finally, we present our conclusions and
discuss the possible implications they hold for institutional researchers in particular,
and social scientists generally.
Theoretical Framework
There is a lengthy and voluminous body of research that addresses response
rates in social science survey administration (Donald, 1960; Filion, 1975; Fowler, 1993;
Kish, 1965; Majchrzak, 1984). Despite this history, response rate research can be readily
narrowed into two general categories of study: research that explores methods for
increasing response rates, and research on particular characteristics of respondents. The
former category has received the vast majority of attention, while the latter category has
been substantially less developed. Donald (1960), Kish (1965), and Vigderhous (1978),
among others, have argued that the paramount concern regarding survey
response rates is to maximize rates so that non-response bias will be reduced. Similarly,
Yin (1994), Fowler (1993), and Majchrzak (1984) tell us that better response rates not
only translate into better data and better statistical inferences, but that policy and
decision making functions are improved because getting information from more
respondents means that more stakeholder voices are being heard.
The focus on maximizing survey response rates has resulted in a sort of cottage
industry for researchers who have explored methods for increasing rates. Dillman et al.
(1974), for example, developed a seminal method for increasing survey response rates
that is anchored in systematic and repetitive correspondence with members of the
survey population. Others have discovered different means to improving response
rates, including the provision of incentives for responding (Dillman et al., 1974;
Oskenberg et al., 1991), instrument color and format (Heberlein & Baumgartner, 1978),
tailoring the subject matter of the survey to particular populations (Fox et al., 1988), and
coercion or forced participation (Hecht, 1993). And, as Watson (1998) tells us, research
about ways and means of improving response rates continues to receive considerable
attention from social scientists.
Receiving less attention, however, is the study of response rates by respondent
characteristics. Recall that our study sought to develop an understanding of whether
electronic and mail surveys differed by respondent characteristics. As noted, Yin (1994),
Fowler (1993), Majchrzak (1984), and others tell us that understanding "who" our
respondents are is essential to policy decisions because decision makers must know, as
well as possible, who their stakeholders are and who is providing them with the
solicited information. Despite such expressed value, relatively little research has
focused on the differences in response rates by characteristics such as gender or race
and ethnicity. There is some recent research, however, that provides us with a basis for
examining the Cornell surveys. Green and Stager (1986) summarize a lengthy history of
research that suggests response rates differ by gender with women more likely to
respond than men. Similarly, Taylor and Summerhill (1985), as well as Watson (1998)
point to research that suggests response rates might differ by race and ethnicity when
the subject matter of the survey is particularly meaningful to respondents of particular
racial or ethnic backgrounds. Specifically, minorities seem to respond at rates equal to
or greater than whites when the subject matter is perceived as directly applicable to
them (Watson, 1998). But little more has been done to thoroughly explore the idea that
respondent characteristics, particularly gender, race, and ethnicity, might affect
response rates. This is particularly troubling in light of higher education efforts at
increasing our understanding of the needs of traditionally marginalized students,
including women and under-represented minorities (Bowen & Bok, 1997).
Thus, we know quite a lot about the importance of maximizing response rates, as
well as the methods used to increase the rate of response in self-reported social science
survey research. However, we know considerably less about whether the gender, racial,
and ethnic composition of respondents affects their rate of response to such surveys,
except that women generally respond at higher rates than men, and underrepresented
minoritiesby negative implicationmight respond at rates similar to whites only
when the subject matter of the survey is perceived as directly important to them. These
theories and concepts, however underdeveloped, provided us with a framework to
begin our examination of comparative response rates between electronic and mail
surveys.
In addition to these ideas, we drew upon a small but emerging body of research
that explores the differences between electronic and mail surveys. Similar to the
research on respondent characteristics, we know relatively little about whether
response rates for electronic surveys differ from response rates for mailed surveys
(Smith, 1997). However, Smith (1997), Couper and others (1998) examined the results of
numerous electronic surveys and suggest that response rates for e-mail and web
surveys do not necessarily differ from mailed paper and pencil surveys, and that many
of the same techniques for improving the latter will apply toward improving rates for
electronic efforts. Similarly, Watson (1998) saw no appreciable difference in response
rates between the media, and foresees a time when the ease of respondent access to
computer technology will work to increase overall response rates above what we have
grown to expect from mail surveys. Others, however, caution that certain issues might
curb response rates for electronic surveys, including privacy concerns regarding the
internet (Goree & Marzalel, 1995), a lack of familiarity with computer technology
(Kaminer, 1995), and respondent interest in the subject matter of the survey (Kawasaki
& Raven, 1995).
In summary, there is consensus around the great importance of maximizing
response rates and, to a lesser degree, around the best methods for doing so. There is
less uniformity in thinking around issues of response rates and respondent
characteristics, particularly along gender, racial, and ethnic lines. Moreover, there is a
striking void in research that compares response rates between electronic and mail
surveys, although some concepts have recently emerged that begin to shed light upon
the issue. The exploratory nature of our study is, in part, an effort to untangle these
overlapping, conflicting, and underdeveloped ideas about whether differences exist
between electronic and mail surveys.
As discussed below, our findings help to begin to sort out this theoretical
entangling and take steps toward advancing many of the concepts presented here. Next,
however, we turn to a discussion about the methods we used to develop our findings.
Methods
This study is foremost an exploratory case study of Cornell's experience in
administering two similar self-reporting surveys using different data collection
media: the web and a paper-and-pencil instrument. Yin (1994) explains that limiting a
study to a single case is particularly useful for exploratory purposes because it provides
both a basis for describing the phenomena at hand, and for extending emerging theories
to practice and generalizing one experience to a larger theoretical framework. Hence,
this case was particularly appealing because the emerging theories discussed here are
directly applicable to Cornell's experience and offer rich ground for making the
connections contemplated by Yin (1994).
Additionally, because we explicitly sought to inform our decision making about
the administration of surveys to students, our inquiry includes elements of explanatory
case studies, or what policy researchers most generally refer to as "policy science"
(Fischer, 1995). Fischer notes that policy science includes research aimed at informing
the decision-making processes within an organization, and that examining a single case
is a particularly effective approach to understanding phenomena when there is little
existing research from which to draw information. Hence, our choice of a case study
approach was particularly appropriate for developing a better understanding of issues
surrounding the use of electronic surveys and response rates. The remainder of this
section presents the methods used to administer the web and mail surveys that are the
focus of this study.
Cornell University is a highly selective, private, Research I university, as are the
other schools discussed below. Recall this study focused on Cornell's administration of
two surveys, both administered consortially through COFHE: a survey of graduating
seniors and the Cycles Survey, Cornell's first administration of a web-based survey. The
Senior Survey was intended to learn more about the undergraduate experiences and
future plans of graduating seniors, and utilized a mailed paper and pencil instrument
for its administration at Cornell. The Cycles Survey assessed enrolled students'
perceptions about a wide-range of their undergraduate experiences, and was
administered at Cornell as an electronic survey via e-mail and the web. Our study
compared the response rates from the two surveys intra-institutionally and inter-institutionally.
Dillman's (1974) strategies for administering surveys were used for both the mail
and web surveys. The mail survey (Senior Survey) began by sending informational
letters describing the upcoming survey to all graduating seniors in the middle of the
spring semester. Two weeks later, a letter and a copy of the survey were sent out. Three
weeks later, a reminder letter and a copy of the survey were mailed to those who had
not responded. Additionally, incentives for responding were offered, including a
campus store discount coupon for all respondents and a raffle of 30 prizes, the grand
prize being a $1,000 credit for travel. Survey instruments were collected and data were
scanned and entered into a database for analysis.
The web survey (Cycles Survey) used similar methods with newer technologies.
First, an e-mail message was sent to the entire freshman and sophomore classes
immediately following spring break notifying them of the upcoming web-based survey.
A week later, an e-mail message was sent with the web address of the survey so
students could quickly locate it using a web browser. Over the next three weeks, three
reminders with the web address were e-mailed to those who had not completed the
survey. Incentives similar to those used for the Senior Survey were offered to increase
response rates; all respondents received a campus store discount coupon and there was
a raffle of prizes. Respondents completed the survey entirely online and data were
instantaneously entered into a database.
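Both contact sequences above follow the same Dillman-style pattern of repeated, scheduled contacts. As a rough sketch, the schedule arithmetic can be expressed as follows; the start date, the `contact_schedule` helper, and the weekly spacing of the web reminders are illustrative assumptions rather than details reported in the study.

```python
# Sketch of a Dillman-style contact schedule: compute each wave's send date
# from a chosen start date. All names and dates here are hypothetical.
from datetime import date, timedelta

def contact_schedule(start, waves):
    """Return (label, send date) pairs; waves maps labels to week offsets."""
    return [(label, start + timedelta(weeks=offset)) for label, offset in waves]

# Mail (Senior Survey): letter, survey 2 weeks later, reminder 3 weeks after that.
mail_waves = [("informational letter", 0), ("survey mailing", 2),
              ("reminder + survey", 5)]
# Web (Cycles Survey): notice, web address 1 week later, then three reminders
# (assumed here to be weekly).
web_waves = [("notification e-mail", 0), ("web address e-mail", 1),
             ("reminder 1", 2), ("reminder 2", 3), ("reminder 3", 4)]

for label, when in contact_schedule(date(2000, 3, 1), mail_waves):
    print(label, when)
```

The same helper works for either medium; only the wave list changes.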
For the intra-institutional analysis, response rates were calculated for both
surveys and reported in nonweighted percentages for comparison in the aggregate and
across respondent gender and race/ethnicity characteristics. For the inter-institutional
analysis, data from the most recent administration of the Cycles Survey were solicited
from other COFHE institutions for comparison to Cornell. We asked for Cycles Survey
data from the other institutions because we knew many had used web surveys in the
administration and this would provide a basis for extending our findings into a larger
context of web-based survey experiences. Additionally, we asked the other schools to
provide aggregated data across respondent gender and race/ethnicity characteristics.
Six COFHE institutionsall private, highly selective, Research I universitiesprovided
data for comparison. The inter-institutional data were collected from entire populations
of students with the exception of two institutions that reported using a stratified
sampling technique for collecting the data presented here.
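Because the analysis reports nonweighted percentages, in the aggregate and by subgroup, the calculation reduces to simple ratios of respondents to population. A minimal sketch, using hypothetical counts (none of these figures come from the surveys):

```python
# Nonweighted response rates, overall and by subgroup. Counts are invented.

def response_rate(respondents, population):
    """Nonweighted response rate as a whole-number percentage."""
    return round(100 * respondents / population)

counts = {
    "male":   {"respondents": 530, "population": 1000},
    "female": {"respondents": 700, "population": 1000},
}

aggregate = response_rate(
    sum(g["respondents"] for g in counts.values()),
    sum(g["population"] for g in counts.values()),
)
by_gender = {k: response_rate(v["respondents"], v["population"])
             for k, v in counts.items()}

print(aggregate)   # 62
print(by_gender)   # {'male': 53, 'female': 70}
```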
Our findings are presented below under the two categories of analysis: intra-institutional and inter-institutional.
Findings
This section begins with a discussion of our intra-institutional findings, looking
at differences in response rates between Cornell's recently administered web and mail
surveys. Next, we turn to a discussion of inter-institutional findings regarding the
response rates of Cornell and six other COFHE institutions that administered the 1999
Cycles Survey. Both sections examine response rates in the aggregate and across gender
and racial/ethnic characteristics. Moreover, the intra- and inter-institutional findings
share themes that emerged from the data that point to provocative conclusions and
implications.
Our examination of intra-institutional response rates began by looking at the
aggregate rates that resulted from Cornell's administration of the mail and web surveys.
As Table 1 illustrates, the overall response rate for the mail survey (61%) was
considerably higher than the rate for the web survey (36%).
Table 1: Cornell's Overall Response Rates

Survey               Method    Response Rate
'98 Senior Survey    Mail      61%
'99 Cycles Survey    Web       36%
Recall that existing theories and concepts are inconclusive regarding disparate
response rates between electronic and mail surveys. Recall, too, that the same strategies
for increasing response rates were used for both surveys (i.e., Dillman, 1974), they were
administered at about the mid-point of spring semester, and that the surveys were
similar in content, length, and respondents targeted. Nevertheless, we experienced
widely divergent rates in favor of the mail survey. It must be noted, however, that the
1999 Cycles Survey was the first web-based survey administered by Cornell's Office of
Institutional Research and Planning. Thus, the relatively low response might be
attributable, in whole or in part, to our lack of familiarity with the medium. Still, the
substantially lower rate caused us to consider looking deeper into our data.
Table 2 sets out the results of our next level of analysis: response rates by gender.
Viewed in this light we see that women responded at much higher rates than men to
both the mail and web surveys. In fact, 70% of the women surveyed via mail completed
and returned the questionnaire compared to 53% of men. Similarly, 42% of the women
completed the web survey versus 30% of the men.
Table 2: Cornell's Response Rates by Gender

Survey               Method    Male    Female
'98 Senior Survey    Mail      53%     70%
'99 Cycles Survey    Web       30%     42%
These findings suggest that men are less likely than women to respond to survey
questionnaires, regardless of the medium used in administration. Such a finding, if
consistently present, might trouble researchers who seek more balanced response rates
between men and women. However, equally noteworthy was our finding that both
men and women were less likely to respond to our web survey than to our mail survey.
Hence, Table 2 suggests there might be differences in response rates that result from
mail and web surveys, but those differences are not apparent when examined by the
gender of respondents.
Similarly, Table 3 sets out Cornell's response rate data by the race and ethnicity
of respondents. However, unlike the data regarding gender, the findings here suggest
other respondent characteristics might play a larger role in the response rates achieved
through mail and web surveys.
Table 3: Cornell's Response Rates by Race/Ethnicity

Survey       Method   White   African A.   Asian A.   Hispanic   Native A.   Intrn'l
'98 Senior   Mail     64%     34%          60%        44%        36%         63%
'99 Cycles   Web      37%     24%          38%        31%        35%         35%
As Table 3 depicts, White and Asian-American students were more likely than
underrepresented minority students to respond to both surveys. Here, the greatest
disparity in these response rates was found in the mail survey, particularly among
African-Americans (a 34% response rate) and Native-Americans (a 36% response rate).
However, the data from the web survey are equally troubling, and are tempered only
when viewed in light of the relatively low response rates for Whites (37%). This relative
view does not overcome the very low rates of response by African-Americans (24%),
Hispanics (31%), and Native Americans (35%).
Thus, Table 3 presents two issues worth noting. First, the mail survey results
indicate a wide disparity between the response rates of underrepresented minorities
and the White, Asian-American, and International students. Here, because the latter
groups of respondents comprise the majority of the population surveyed, such disparity
raises questions akin to sampling bias where the aggregate data might represent only
the views of the majority population. The views of the underrepresented students, to
the extent they differ, are subsumed into an aggregate dominated by the majority. The
second issue raised by the data in Table 3 involves possible bias resulting from the
nonresponse of underrepresented minorities. When response rates are very low, too
much data might be missing to draw meaningful conclusions about the views of the
population as a whole (Dillman et al., 1974; Fowler, 1993). That is, our data might be
telling us about the views of a narrow sub-group within the population who were
disproportionately inclined to respond. Both potential biases stemming from Table 3
raise a question of growing importance to campuses across the country: are we getting
enough information from underrepresented minority respondents under either survey
method?
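The sampling-bias concern raised here can be illustrated with a toy calculation: when subgroups hold different views and respond at different rates, the respondent-only average drifts toward the majority's view. The group shares, response rates, and scores below are invented purely for illustration.

```python
# Toy illustration (invented numbers, not study data) of how disparate
# subgroup response rates bias an aggregate estimate toward the majority.

groups = {
    # name: (population share, response rate, mean satisfaction score)
    "majority": (0.80, 0.60, 4.0),
    "underrepresented": (0.20, 0.30, 2.5),
}

# True population mean: weight each group's score by its population share.
true_mean = sum(share * score for share, _, score in groups.values())

# Mean among respondents only: each group contributes in proportion
# to (population share x response rate).
resp_weight = sum(share * rate for share, rate, _ in groups.values())
observed_mean = sum(share * rate * score
                    for share, rate, score in groups.values()) / resp_weight

print(round(true_mean, 2))      # 3.7
print(round(observed_mean, 2))  # 3.83
```

The respondent-only mean overstates the population mean because the lower-scoring group is underrepresented among respondents, which is exactly the aggregation hazard the paragraph above describes.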
In summary, our findings regarding Cornell's experience suggest the emergence
of several themes. First, we found the aggregate response rate for the web survey to be
substantially lower than the rate for the mail survey. Second, we found that both men
and women responded at lower rates for the web survey than for the mail survey, but
that the overall response rates for men were lower for both the mail and web surveys.
Finally, we found that underrepresented minorities responded at lower rates than
Whites and Asian-Americans for both mail and web surveys. Although both surveys
resulted in lower response rates for minorities, the web survey results were particularly
disappointing because they bring into question issues of nonresponse bias even more so
than the rates for the mail survey. Thus, our initial analysis suggested that information
from males and minorities might be further excluded by web surveys than by mail
surveys. However, because these issues arose from our single case study, we
determined that they ought to be interpreted in light of other institutions' experiences
before attempting to generalize to a larger theoretical understanding.
The remainder of this section presents inter-institutional findings that resulted
from comparing Cornell's experience to those of six other COFHE institutions.
Specifically, we compare response rates for the 1999 Cycles Surveys across institutions
and across gender and racial/ethnic categories. Similar to the intra-institutional
findings above, several themes emerged from this analysis that help us understand our
earlier analyses in a larger context.
The data in Table 4 suggest that aggregate response rates differ widely among
institutions but not necessarily along survey method lines.
Table 4: '99 Cycles Survey Overall Response Rates by COFHE Institutions

Institution       Method   Response Rate
Cornell           Web      36%
Institution 1*    Web      84%
Institution 2     Web      53%
Institution 3     Web      57%
Institution 4     Mail     52%
Institution 5     Mail     41%
Institution 6     Both     42%

*Institution 1 included its web survey as part of the registration for classes for the upcoming semester.
As the table shows, Cornell and the first three COFHE universities listed used
the web to administer the survey. Aggregate response rates for this approach ranged
from a low of 36% (Cornell) to a high of 84% (Institution 1). It should be noted that
Institution 1's survey was tied to course registration for the next semester, and its
respondents were essentially "coerced" into responding. Nevertheless, Institutions 2
and 3 experienced response rates over 50% using the web. Furthermore, Institutions 4
and 5 used a mail survey approach that produced response rates of 52% and 41%,
respectively, while Institution 6 used both methods simultaneously and achieved an
aggregate rate of 42%.
Read alongside Cornell's experience (the mail survey produced a much higher
rate than the web), Table 4 tempers our earlier analysis regarding the wide disparity
between response rates that favored mail over web administration. Indeed, the salient
finding from Table 4 is that the data are inconclusive. Because this finding is consistent
with much of the existing research about response rates, we are inclined to conclude
that our study resulted in no clear evidence that one method of survey administration is
inherently better than the other at achieving higher response rates. However, as will be
shown, we did find Cornell's experience was more consistent with other schools
regarding response rates by gender and racial/ethnic characteristics.
Specifically, as Table 5 sets out, other COFHE institutions found that males
tended to respond to both surveys at lower rates than females.
Table 5: '99 Cycles Survey COFHE Institutions Response Rates by Gender

Institution       Method   Male    Female
Cornell           Web      30%     42%
Institution 1*    Web      84%     84%
Institution 2     Web      45%     60%
Institution 3     Web      57%     57%
Institution 4     Mail     45%     60%
Institution 5     Mail     37%     46%
Institution 6     Both     37%     47%

*Institution 1 included its web survey as part of the registration for classes for the upcoming semester.
Here, we see that irrespective of method, men responded at lower rates than
women in all but two instances, in which men responded at rates equal to women. Again,
because this finding is consistent with the larger body of research about response rates,
we are compelled to look closely at the issues it raises, including whether such disparate
response rates affect statistical analyses or undermine the information needs of
institutional decision makers. Because our data were drawn from a limited sample of
institutions (highly-selective, private, Research I), it is difficult to generalize our finding
that men respond at lower rates than women for both web and mail surveys. However,
we found enough consistency across schools to warrant considerable future attention
from researchers and decision-makers interested in developing a better understanding
of their stakeholders. As we will show, such interest should extend into further
exploration of underrepresented minority student response rates, as well.
Recall that we found substantially lower response rates for Cornell's
underrepresented minority students than its White and Asian-American students for
both the mail and web surveys. As Table 6 illustrates, that finding was generally true
for the other COFHE institutions.
Table 6: '99 Cycles Survey COFHE Institutions Response Rates by Race/Ethnicity

Institution       Method   White   African A.   Asian A.   Hispanic   Native A.   Intrn'l
Cornell           Web      37%     24%          38%        31%        35%         35%
Institution 1*    Web      85%     61%          72%        70%        100%        71%
Institution 2     Web      54%     43%          56%        49%        63%         N/A
Institution 3**   Web      76%     55%          66%        52%        88%         96%
Institution 4     Mail     50%     36%          63%        42%        9%          44%
Institution 5     Mail     42%     34%          42%        40%        40%         45%
Institution 6     Both     9%      46%          45%        29%        20%         26%

*Institution 1 included its web survey as part of the registration for classes for the upcoming semester.
**Institution 3 permitted double reporting of race/ethnicity categories.
Here we see that White, Asian-American, and International students generally
responded at higher rates than underrepresented minority students in both web and mail
surveys. Although the size of the disparity varies across student populations and survey
methods, the general trend suggests that schools are getting lower responses from students
who are often the most marginalized on campus (Bowen & Bok, 1998). This finding is especially
troubling because lower response rates suggest a further exclusion of populations that
have been historically excluded from many aspects of higher education, including
institutional research (Bowen & Bok, 1998). Moreover, the administration of the Cycles
Survey at these institutions included questions about campus climate and diversity, and
the information missing from underrepresented minority students would likely be
particularly important to increasing our understanding of these issues. These findings,
because of the importance of the issues raised, compel us to conclude that too little has
been done to understand the nature of minority student participation in survey
research. This conclusion will be addressed again in the final section of this paper.
In summary, our most salient findings suggest that women respond at greater
rates than men regardless of survey method, and that underrepresented minority
students generally respond at lower rates than Whites, Asian-Americans, and
International students, regardless of the survey method used. These findings tell us
that respondent characteristics, rather than survey method, are tightly coupled to
response rates. This coupling raises questions and concerns that we address in the next
and final section.
Conclusions and Implications
Our findings produced several themes worthy of further consideration. As
discussed, comparing Cornell's administration of two surveys, one mail and one
web-based, produced three themes: the aggregate response rate was substantially higher
for the mail survey than for the web survey, women responded at higher rates than men,
and White, Asian-American, and International students responded at higher rates than
underrepresented minority students.
Similarly, when we viewed Cornell's experience against a backdrop of other
COFHE institutions' administration of the Cycles Survey, we found that several themes
emerged. First, unlike our intra-institutional data, the inter-institutional comparison
produced no consensus around aggregate response rates, irrespective of survey
method. Second, other schools generally found that males responded less than females,
and that White, Asian-American, and International students responded at higher rates
than underrepresented minorities.
These findings raise several provocative issues for practitioners. First, there is
little empirical evidence that web-based surveys result in lower response rates than mail
surveys. However, our intra-institutional findings suggest that more research is needed
before firm conclusions can be drawn. This subject is surprisingly under-researched
considering the proliferation of electronic surveys and we hope that our study places
the issue among institutional research priorities.
Second, the idea that men respond at lower rates than women is troubling
because of the possibility that it will lead to biased data or misleading analysis. Again,
little contemporary empirical research exists on this issue. We believe that too much
focus, in research and practice, on aggregate response rates as the sole basis for
increasing data validity may obscure underlying nonresponse bias.
Institutional researchers typically place a premium on "good information" for use in
decision-making processes. However, that charge is undermined if potentially biased
data are overlooked or if too little information is collected from essential stakeholders.
Hence, we encourage further research on the differences between the response rates of
men and women as a means to increasing the quality and value of the data and analyses
we produce, as well as to developing a better understanding of why such differences might
exist.
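One standard partial remedy for such differential nonresponse is post-stratification weighting, which reweights respondents so that the analyzed sample matches the invited population on known characteristics such as gender. A minimal sketch, using hypothetical invitee and respondent counts rather than figures from any of the surveys discussed here:

```python
# HYPOTHETICAL counts: equal numbers of male and female invitees,
# but men respond at a lower rate than women.
population = {"male": 1000, "female": 1000}   # invitees per group
respondents = {"male": 300, "female": 420}    # completed surveys per group

pop_total = sum(population.values())
resp_total = sum(respondents.values())

# Weight each group by (population share) / (respondent share), so that
# the weighted respondent pool mirrors the gender mix of the population.
weights = {
    g: (population[g] / pop_total) / (respondents[g] / resp_total)
    for g in population
}

# Men, being underrepresented among respondents, receive a weight above 1;
# women, being overrepresented, receive a weight below 1.
for g, w in sorted(weights.items()):
    print(g, round(w, 3))
```

Weighting of this kind corrects only for the characteristics used to build the weights; it cannot repair bias on unmeasured dimensions, which is why understanding why the response-rate differences exist remains important.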
Finally, our findings suggest that too little is understood about lower response
rates of minority students. As discussed, there is a paucity of research on the subject
despite the increased commitment to understanding issues of climate and diversity on
our campuses. Furthermore, the notably low response rates for some minority
populations (particularly African-Americans, Hispanics, and Native Americans)
found by Cornell and our comparator institutions raise concerns about the
generalizability of the information provided by these students. We contend that
increasing response rates for underrepresented minority students is necessary to
developing an authentic understanding of their needs and roles at our institutions.
Hence, we encourage a vigorous undertaking of research into the response rates of
minority students and possible methods for increasing their survey participation.
These efforts might begin by exploring response rates in other higher education
contexts. Recall that the data presented here came from students at seven private, highly selective research institutions and consequently represent only a small percentage of the
students who participate in US higher education. Thus, it would be helpful to explore
our findings in light of other, broader contexts that include different types of
institutions with students who possess different skills, have different levels of
experience and comfort with computers and the internet, and who come from different
social and economic backgrounds. The proliferation of electronic survey research
compels us to look at how the new medium for collecting information about and from
students unfolds into a larger social context and raises the possibility of further
excluding the most marginalized students from our data collection efforts. Indeed, the
idea that access to, and the use of, computer technology and the Internet differs greatly
along racial, social, and economic lines is gaining popularity. The concept of an
emerging "digital divide" has troubled researchers who have found that access to the
information advantages of cyberspace is woefully lagging for disadvantaged
populations (Ebo, 1998; Perelman, 1998). But as we have demonstrated here, the
Internet is used for more than disseminating information: it is increasingly used for
collecting it, as well. Developing a better understanding of a growing digital divide
along racial, gender, and economic lines seems as important for those who collect
information via the Internet as it is for those who use the web to disseminate
information.
Thus, we urge others from all types of institutions of higher education to look
closely at how the increased use of electronic surveys might affect the data collected
from underrepresented students. In so doing, the normative ideal of maximizing
aggregate response rates in pursuit of increased data quality might be refounded on a
more comprehensive and inclusive concept of "good data" that seeks to increase
response rates from stakeholders across campuses and beyond.
References
Aisu, B., Antons, C., and Fultz, M. (1998). Undergraduate perceptions of survey participation: improving response rates and validity. AIR 1998 Annual Forum Paper. Minneapolis, MN.
Allen, D., and Fry, R. (1986). Survey administration: computer-based vs. machine readable. ERIC ED 270470.
Berge, Z., and Collins, M. (1996). "IPCT Journal" readership survey. Journal of the American Society for Information Science, 47(9), 701-710.
Bowen, W., and Bok, D. (1998). The shape of the river: long-term consequences of considering race in college and university admissions. Princeton: Princeton University Press.
Couper, M. (ed.) (1998). Computer assisted survey information collection. New York: Wiley.
Dillman, D., Carpenter, E., Christensen, J., and Brooks, R. (1974). Increasing mail questionnaire response: a four state comparison. American Sociological Review, 39(5), 744-756.
Donald, M. (1960). Implications of nonresponse in the interpretation of mail questionnaire data. Public Opinion Quarterly, 24, 99-114.
Ebo, B. (ed.) (1998). Cyberghetto or cybertopia: race, class, and gender on the internet. Westport, CT: Praeger.
Filion, F. (1975). Estimating bias due to nonresponse in mail surveys. Public Opinion Quarterly, 39(4), 482-492.
Fischer, F. (1995). Evaluating public policy. Chicago: Nelson-Hall.
Fowler, F. (1993). Survey research methods, 2nd ed. Newbury Park: Sage.
Fox, R., Crask, M., and Kim, J. (1988). Mail survey response rate: a meta-analysis of selected techniques for increasing response. Public Opinion Quarterly, 52(4), 467-491.
Goree, C., and Marzalek, J. (1995). Electronic surveys: ethical issues for researchers. College Student Affairs Journal, 15(1), 75-79.
Hayes, B. (1998). Measuring customer satisfaction: survey design, use, and statistical analysis methods. Milwaukee: ASQC Quality Press.
Heberlein, T., and Baumgartner, R. (1978). Factors affecting response rates to mailed questionnaires: a quantitative analysis of the published literature. American Sociological Review, 43, 447-462.
Kaminer, N. (1997). Scholars and the use of the internet. Library & Information Science Research, 19(4), 329-345.
Kawasaki, J., and Raven, M. (1995). Computer-administered surveys in extension. Journal of Extension, 33(3).
Kish, L. (1965). Survey sampling. New York: John Wiley.
Majchrzak, A. (1984). Methods for policy research. Newbury Park: Sage.
Mertens, D. (1998). Research methods in education and psychology: integrating diversity with qualitative & quantitative approaches. Thousand Oaks, CA: Sage.
Oksenberg, L., Cannell, C., and Kalton, G. (1991). New strategies of pretesting survey questions. Journal of Official Statistics, 7(3), 349-366.
Pascarella, E., and Terenzini, P. (1991). How college affects students. San Francisco: Jossey-Bass.
Perelman, M. (1998). Class warfare in the information age. New York: St. Martin's Press.
Smith, C. (1997). Casting the net: surveying an internet population. Journal of Computer-Mediated Communication, 3(1), 77-84.
Van Maanen, J., Dabbs, J. M., Jr., and Faulkner, R. R. (1982). Varieties of qualitative research. Beverly Hills, CA: Sage.
Vigderhous, G. (1978). Analysis of patterns of response to mailed questionnaires. In Survey design and analysis: current issues, Duane F. Alwin, ed. Beverly Hills: Sage.
Watson, S. (1998). A primer in survey research. Journal of Continuing Higher Education, 46(1), 31-40.
Wolcott, F. (1994). Transforming qualitative data: description, analysis, and interpretation. Thousand Oaks, CA: Sage.
Yin, R. (1994). Case study research: design and methods. Newbury Park: Sage.