
TO SHOW HOW WE CARE: COMBINING WEB-BASED TECHNOLOGY AND INTERNATIONAL STUDENT NEEDS ASSESSMENT

2000, North East Association for Institutional Research

North East Association for Institutional Research
27th Annual Conference PROCEEDINGS
Pittsburgh Hilton, Pittsburgh, PA
November 4-7, 2000
Bridges to the Future: Building Linkages for Institutional Research

VOLKWEIN VERSES for NEAIR

Nineteen seventy-four
Is a year that we adore.
Thirty-three members met to begin this organization
In Williamstown, Mass., they deserve an ovation.
We now boast 300 passengers on our institutional research train
That stretches from Cape Cod to Ohio, and Virginia to Maine.
With NEAIR maturity that is twenty-seven years long,
We welcome you to Pittsburgh with a program that is very strong.
These stellar Local Arrangements have indeed been very nice,
So please join me in applauding the hard-working Gary Rice.
Our Conference Program ranges from qualitative methods to techy,
So also please join me in thanking this Brodigan named Becky.
And without Beth Simpson we would not be understood.
Every day of the week, she makes us look good.
We also give warm appreciation to Past President Karen Bauer.
In her three Steering Committee years, our organization did flower.
To implement a decision that was collectively brainy,
I now pass the torch to Anne Marie Delaney.
I have deeply appreciated all the volunteer services rendered.
NEAIR is an organization that is many-splendored.

Fred Volkwein, NEAIR President, 1999-00

1999-2000 Steering Committee

Officers:
President: Fred Volkwein, Pennsylvania State University
President-elect: Anne Marie Delaney, Babson College
Past President: Karen Bauer, University of Delaware
Secretary: Eleanor Swanson, Monmouth University
Treasurer: Mary Ann Coughlin, Springfield College

Members-at-Large:
Peggye Cohen, George Washington University
Corby Coperthwaite, Manchester Community College
Anne Marie Delaney, Babson College
Jim Fergerson, Bates College
Steve Thorpe, Drexel University
Rob Toutkoushian, University System of New Hampshire

2000 Conference Chairs:
Program Chair: Becky Brodigan, Middlebury University
Publications: Heather Kelly Isaacs, University of Delaware
Local Arrangements Chair: Gary Rice, Indiana University of Pennsylvania
Membership Secretary: Beth Simpson, HEDS Consortium

TABLE OF CONTENTS

I. Papers, Panel Presentations, and Work Shares

The Influence of Personality Traits, Pre-College Characteristics, and Co-Curricular Experiences on College Outcomes (Karen W. Bauer)
Threading the Developmental Maze: Remedial Program Complexity and Student Progress at a Large, Suburban Community College (Karl Boughan)
Student Self-Perceived Gain Scales as the Outcome Measures of Collegiate Experience (David X. Cheng)
Institutional Researchers: Challenges, Resources and Opportunities (Anne Marie Delaney)
Responsibilities and Staffing of Institutional Research Offices at Jesuit and Prominent Other Catholic Universities (Donald A. Gillespie)
New Technology and Student Interaction With the Institution (Gordon J. Hewitt and Dawn Geronimo Terkla)
Developing a Web-Based Version of The College Board's Admitted Student Questionnaire™ (Ellen Kanarek)
Creation of a Scale to Measure Faculty Development Needs and Motivation to Participate in Development Programs (Arthur Kramer)
The Transformational Power of Strategic Planning (Marsha V. Krotseng and Ronald M. Zaccari)
To Show How We Care: Combining Web-Based Technology and International Student Needs Assessment Survey (Tsuey-Ping Lee and Chisato Tada)
Developing an Analysis of Outcomes for the Writing Proficiency Requirement (Kevin B. Murphy)
Adult Education in the 1990s: An Analysis of the 1995 National Household Education Survey Database (Mitchell S. Nesler and Roy Gunnarsson)
Curriculum Review at a Virtual University: An External Faculty Panel Approach (Mitchell S. Nesler and Amanda M. Maynard)
The IR-CQI Connection (Tracy Polinsky)
We Can't Get There in Time: Assessing the Time between Classes and Classroom Disruptions (Stephen R. Porter and Paul D. Umbach)
Assessing the Assessment Decade: Why a Gap Between Theory and Practice Fuels Faculty Criticism (Michael J. Strada)
Structural/Organizational Characteristics of Higher Education Institutions Leading to Student Performance, Learning, and Growth: A Response to Accountability and Accreditation Forces in Two and Four Year Sectors (Linda C. Strauss and J. Fredericks Volkwein)
Using Qualitative Analytical Methods for Institutional Research (Carol Trosset)
Assessing Outcomes for School of Business Majors Using a Primary Trait Analysis (David W. Wright and Marsha V. Krotseng)
The Impact of Remedial English Courses on Student College-Level Coursework Performance and Persistence*** (Meihua Zhai and Jennie Skerl)

II. 2000 Conference Program

*** Meihua Zhai and Jennie Skerl's paper was selected for the 2000 Best Paper Award.

THE INFLUENCE OF PERSONALITY TRAITS, PRE-COLLEGE CHARACTERISTICS, AND CO-CURRICULAR EXPERIENCES ON COLLEGE OUTCOMES

Karen W. Bauer [1]
Associate Director, Institutional Research & Planning
University of Delaware

[1] The author wishes to thank former postdoctoral researcher, Hye-Sook Park, for her assistance in data analysis for this project.

Abstract

The relationship of students' pre-college characteristics, personality traits and co-curricular activities with academic achievement and critical thinking was examined in a sample of 252 engineering, science, math, and psychology undergraduates enrolled at a selective Carnegie I-Extensive doctoral-granting university. Results show that personality traits influence college outcomes both directly and indirectly through co-curricular activities even after controlling for pre-college characteristics, such as SAT score and high school GPA. Compared to personality traits, pre-college characteristics show larger effects on GPA and critical thinking skills.

Acquisition of content knowledge and critical thinking are critical components of intellectual development as well as measures of college success. Among college outcomes, achievement has been one of the most frequently researched topics in higher education (Astin, 1977), and critical thinking skills are regarded as one of the major outcomes of college education (Pascarella, 1989; Facione, Sanchez, Facione, & Gainen, 1995).
College outcomes have been investigated using various predictors, including general verbal abilities, aptitude test scores (e.g., SAT and ACT), sex, family financial characteristics, in- and out-of-class experiences, and personality traits (Astin, 1977, 1993; Ting & Robinson, 1998; Pascarella, 1989; Pascarella, Whitt, Edison, Nora, Hagedorn, Yeager & Terenzini, 1996, 1997; Child, 1969; Entwistle & Entwistle, 1970; Digman & Takemoto-Chock, 1981). Although it is likely that personality traits are related to students' participation in activities, and that activities also influence the outcomes, few studies have investigated these variables simultaneously.

Among studies that investigated college outcomes using pre-college characteristics, SAT (or ACT) scores and high school GPA consistently explain the largest variance in college outcomes. In predicting the first year grade point average (GPA), high school GPA predicted the largest unique variance (Ting & Robinson, 1998); in predicting critical thinking skills, general verbal ability explained the largest variance. Using the California Critical Thinking Skills Test (CCTST), Jacobs (1995) found that critical thinking is highly correlated with SAT verbal scores, and Ackerman (1999) reported a strong relationship between knowledge and general intelligence (g). Pascarella (1994) found that pre-college characteristics influence not only the outcomes of college directly, but also indirectly influence the outcomes through college course-taking activities, formal classroom experience, and out-of-class experiences. However, according to Mouw and Khanna (1993), the unique effect of pre-college predictors (e.g., high school GPA, college entrance tests) for college success was low. Due to a high correlation between these predictors, the unique contribution of each predictor to college outcomes was small; thus, a large amount of variation in college GPA still needs to be explained.

To investigate the unclaimed variance in GPA, some researchers have examined personality characteristics as an additional predictor of college performance (Tross, Harper, Osher, & Kneidinger, 2000). Findings on the predictive ability of personality characteristics are somewhat mixed; some researchers (e.g., Biggs, Roth, & Strong, 1970; Evans, 1970; Morgan, 1972) report that personality characteristics were not related to student GPA when aptitude characteristics such as SAT were controlled. Other researchers, however, found that personality type influences one's activities (Hooker, Frazier, & Manahan, 1994) and, in particular, college behaviors and outcomes (Tross et al., 2000; Digman & Takemoto-Chock, 1981; Entwistle, 1972). The effect of personality traits on achievement also varies depending on ability and age level. Entwistle's (1972) review of studies involving Cattell's 16 Personality Factors and Eysenck's Personality Inventory concluded that success at the university level is associated with introversion, but at the primary school level, success is related to stable (low neuroticism) extroversion. According to Child (1969), both introversion and neuroticism are advantageous traits for university students' academic achievement because introverts avoid social situations and enjoy bookish and abstract/conceptual pursuits, while neurotics have a higher level of internal drive.
Entwistle and Entwistle (1970) partly attribute introverts' higher academic achievement to their good study habits. Additionally, Digman and Takemoto-Chock (1981) found that conscientious students are well-organized, purposeful, and persistent, and that these characteristics are highly related to academic achievement (e.g., GPA). However, these studies did not control for students' pre-existing aptitudes in investigating the effect of personality on cognitive outcomes. Thus, it is difficult to justify the effect of personality traits on academic attainment objectively.

Researchers have also studied how out-of-classroom experiences influence college students' academic, intellectual or cognitive outcomes. For example, Inman and Pascarella (1998) found that college attendance positively affects the development of critical thinking skills. Other researchers found a positive association between the nature and frequency of students' out-of-class contacts with faculty members and gains on measures of academic or cognitive development. For example, students' participation in internships or study-abroad experiences was related to higher grades and to self-reported gains in knowledge of a particular discipline (Astin, 1993; Kuh, 1995). Also, students' out-of-class interactions contribute to gains in general knowledge, critical thinking skill and problem solving skills (e.g., Astin, 1993; Baxter-Magolda, 1992; Kuh, 1995; Terenzini, Springer, Pascarella & Nora, 1995). Pascarella et al. (1989) found that college experiences were modestly associated with higher critical thinking skills, while the composite college experience scale (e.g., type of course work, non-classroom interaction with faculty, study time, and extra-curricular activities) showed significant correlation with overall critical thinking skills.

Despite some possible causal relationships among these variables, few studies have investigated the dynamic relationships among students' personality types, college students' co-curricular activities, and students' cognitive outcomes. Thus, this study investigated how personality traits affect students' involvement in co-curricular activities and college outcomes, both directly and indirectly via co-curricular activities. Research questions for this study are:

1. What is the relationship between first year students' personality types, co-curricular activities, and end-of-first year outcomes (defined as end of first year GPA and critical thinking score)?
2. Does personality type predict first year GPA and critical thinking score?
3. Do pre-college characteristics predict first year GPA and critical thinking score?
4. Do first year college co-curricular activities affect first year GPA and critical thinking score?

Method

Participants

Participants were 264 undergraduate students who are part of a four-year longitudinal study funded by the National Science Foundation to assess academic and psychosocial effects of involvement in various college activities. The majority of students majored in science, engineering, and psychology at a Carnegie I Doctoral/Research-Extensive state university. Of the original sample, 252 students were included in this study. [2] Among them were 149 females (59%) and 103 males (41%), 193 White (76.5%) and 59 non-White (23.5%). Table 1 shows the descriptive statistics of this sample.

[2] Among these 264 students, twelve students were not included because their factor scores on student union activity and arts were considered as outliers (3 SD above the mean).
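The internal consistency figures in the Alpha column of Table 1 below are coefficient alpha values. As a point of reference only, coefficient alpha can be computed from item-level responses along the following lines; this is a minimal illustrative sketch in Python using simulated data, since the item-level responses are not part of this paper.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Simulated stand-in for one 12-item NEO-FFI subscale from 252 respondents.
rng = np.random.default_rng(0)
simulated_items = rng.integers(0, 5, size=(252, 12)).astype(float)
print(round(cronbach_alpha(simulated_items), 2))
```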
Table 1. Summary statistics and internal consistency estimates of CSEQ Quality of Effort (QE) Subscales, NEO-Five Factor Inventory (FFI), and Watson-Glaser Critical Thinking Appraisal (WGCTA)

Variable                                        N     Mean    SD    Alpha
CSEQ QE-Library Experiences Scale              250   19.61   4.45   0.75
CSEQ QE-Experiences with Faculty Scale         252   19.68   4.74   0.83
CSEQ QE-Course Learning Scale                  251   27.79   5.09   0.80
CSEQ QE-Art, Music, Theater Scale              247   18.48   5.02   0.75
CSEQ QE-Student Union Scale                    252   22.25   5.36   0.79
CSEQ QE-Athletic/Recreation Facilities Scale   250   18.59   5.87   0.81
CSEQ QE-Clubs and Organizations Scale          250   17.20   6.37   0.89
CSEQ QE-Experience in Writing Scale            251   25.25   6.14   0.88
CSEQ QE-Personal Experiences Scale             252   22.34   5.28   0.77
CSEQ QE-Student Acquaintances Scale            252   24.81   5.81   0.84
CSEQ QE-Science/Technology Scale               246   24.23   6.64   0.88
CSEQ QE-Dormitory/Fraternity/Sorority Scale    240   24.03   5.56   0.85
CSEQ QE-Topics of Conversation Scale           246   20.76   5.14   0.80
CSEQ QE-Information in Conversation            248   13.97   3.06   0.76
NEO-Neuroticism                                251   22.10   8.08   0.86
NEO-Extroversion                               251   30.36   6.47   0.81
NEO-Openness to Experience                     251   30.12   6.07   0.73
NEO-Agreeableness                              251   32.40   5.98   0.77
NEO-Conscientiousness                          251   32.65   6.69   0.83
Watson-Glaser Critical Thinking Appraisal      252   29.54   5.33   0.77

Instruments

Information about these students was collected from the university's student records database and by using the following three published measures: the NEO-Five Factor Inventory (NEO-FFI; Costa, Jr. & McCrae, 1991), the Watson-Glaser Critical Thinking Appraisal (WGCTA; Watson & Glaser, 1994), and the College Student Experiences Questionnaire (CSEQ; Pace, 1984).

NEO-Five Factor Inventory (NEO-FFI). The NEO-FFI measures the most basic dimensions underlying human traits. There are five subtests in this inventory, each composed of twelve items. The five subtests are: 1) NEO-Neuroticism measures an individual's level of adjustment and emotional stability (coefficient alpha = 0.86, n = 12); 2) NEO-Extroversion measures level of sociability and consequent behaviors that occur as a result of interactions with others (coefficient alpha = 0.81, n = 12); 3) NEO-Openness-to-Experiences measures imagination, aesthetic sensitivity, attentiveness to inner feelings, preference for variety, intellectual curiosity, and independence of judgment (coefficient alpha = 0.73, n = 12); 4) NEO-Agreeableness measures level of sympathy and altruism toward others and eagerness to help (coefficient alpha = 0.77, n = 12); and 5) NEO-Conscientiousness measures ability to manage impulses and desires and the process of planning, organizing, and carrying out tasks (coefficient alpha = 0.83, n = 12; Costa & McCrae, 1991).

Watson-Glaser Critical Thinking Appraisal (WGCTA). The WGCTA is a composite measure that examines attitudes of inquiry; knowledge of the nature of inferences, abstractions, and generalizations; and skills in employing the above attitudes and knowledge (Watson & Glaser, 1994). The WGCTA Form S consists of 40 items measuring five subtests of critical thinking: inference; recognition of assumptions; deduction; interpretation; and evaluation of arguments. The WGCTA data for this sample have a reliability of 0.77.

College Student Experiences Questionnaire (CSEQ). The CSEQ examines students' quality of effort put forth with various college activities, level of satisfaction with the campus environment, perceptions of the campus environment (emphasis on scholarly, aesthetic, and vocational issues), and perceived annual gain in a series of academic and personal items.
The CSEQ is composed of 14 quality of effort composite scales that measure level of student engagement, seven questions that query perceptions of the academic environment, and 21 items related to academic and social growth during the current college year. The reliability of the scores on the quality of effort scales ranged from 0.75 to 0.91, with an average of 0.85.

Procedure

After receiving approval from the University's Human Subjects Committee, researchers sent a letter to freshman-level students majoring in science, math, and psychology requesting their participation in a study of their academic experiences. Two hundred sixty-four students agreed to participate and met with the researchers to complete several questionnaires including the CSEQ, WGCTA, and NEO-FFI. Each survey took approximately 15 to 30 minutes to complete. A maximum of 30 minutes was allowed to complete the WGCTA. A signed consent form also enabled the researchers to obtain demographic data from the university's student record system (i.e., high school GPA, sex, ethnic classification, SAT, and cumulative GPA). Students were given $5.00 for their participation in this study.

Data Preparation

Four students' SAT scores were not available (matriculated from a foreign country), and two students had extremely low scores and thus were not included. Imputed values were created for the missing cases in order to maintain the sample size. In the CSEQ Quality of Effort subscales, some patterns of missing values were found. It seemed that students who thought the items were not directly related to them simply did not respond to these items. In order not to treat these as missing at random or delete these cases, imputed values were created by assigning the lowest value found in each item, [3] assuming that non-responding students simply skipped the items.

[3] In the Amos program, bootstrapping was not possible with missing values. We imputed these missing values so as not to reduce the sample size (n=16).

Three CSEQ composite scores based on conceptually-related items were created and treated as endogenous variables: [4] 1) academic activities, items related to academic and cognitively oriented activities: experience in writing, library use, course learning, and experience with faculty; 2) conversation, items: topics of conversation, information in conversation, and personal acquaintance; and 3) club/union, items: club, student union, and campus residence activities.

[4] Principal component exploratory factor analysis of the CSEQ quality of effort subscales was attempted, but the scree plot showed only one factor.

Results

A non-recursive path analysis model was built to examine how personality traits influence college outcomes both directly and indirectly through the mediation of co-curricular activities. See Figure 1 below.

[Figure 1. The Influence of Personality Traits and College Activities on College Outcomes: a path diagram in which high school GPA, SAT, the HGPA*SAT interaction, sex, and the five NEO factors (N, E, O, A, C) predict the three activity composites (Academic, Conversation, Club/Union), which in turn predict WGCTA and spring '97 GPA; e1-e5 are error terms.]

In this model, personality trait scores, SAT score, high school GPA, the interaction between SAT scores and high school GPA, and sex were used as exogenous variables; co-curricular activities served as mediating endogenous variables; the Watson-Glaser critical thinking skills score and spring '97 GPA were used as endogenous variables. The interaction effect was investigated by grand-mean centering of each predictor to avoid the problem of multicollinearity.
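The grand-mean centering step can be made concrete with a short sketch. This is a hypothetical illustration in Python/pandas; the column names and values are invented for the example and are not the study's actual variables.

```python
import pandas as pd

# Toy stand-in for the analysis file (four hypothetical students).
df = pd.DataFrame({
    "sat_total": [1180.0, 1250.0, 1090.0, 1320.0],
    "hs_gpa":    [3.4, 3.9, 3.1, 4.0],
})

# Center each predictor at its grand mean before forming the product term;
# this reduces the collinearity between the main effects and their interaction.
df["sat_c"] = df["sat_total"] - df["sat_total"].mean()
df["hgpa_c"] = df["hs_gpa"] - df["hs_gpa"].mean()

# The HGPA*SAT interaction term entered into the path model.
df["hgpa_x_sat"] = df["hgpa_c"] * df["sat_c"]
```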
An interaction effect between SAT total score and high school GPA on spring '97 GPA was significant, so interaction terms were incorporated into the path model. Table 2 shows the correlations among the variables in the model.

The path model indicated a good fit (RMSEA = 0.045 [5]; NFI = 0.997; TLI [6] = 0.995; χ2 = 34.584; df = 23; p = 0.057; n = 252). [7] Amos 4.0 (Arbuckle, 1998), which employs maximum-likelihood estimation of parameters, is generally robust to violations of assumptions with a simple model. However, due to the complexity of our model, and to avoid violating the normality assumption, we deleted those outliers that were more than three standard deviations above the mean on each variable. The model without outliers indicates a better fit compared to the one with outliers; therefore these parameters can be interpreted with confidence.

[5] The 90 percent CI of RMSEA ranged from 0.00 to 0.074.
[6] The Tucker-Lewis Index (TLI) is also known as the Bentler-Bonett non-normed fit index (NNFI).
[7] A just-identifiable (saturated) model with 119 parameters had a value of 1 in NFI and p-value, and 0.948 in ECVI, while the ECVI of our default model with 96 parameters was 0.903.

Table 2. Correlation Matrix of the Variables in the Path Model

[The individual correlation coefficients in this table are not legible in this transcription. The matrix relates high school GPA, SAT, sex, the five NEO scales, the three CSEQ activity composites, WGCTA, and spring '97 GPA.]

Note: **: p<0.01, *: p<0.05. Hgpa represents high school GPA; Conver represents CSEQ conversation-related activities; Acad represents CSEQ academic activities; and Union represents CSEQ student union/club/campus residence-related activities.

Personality Traits Influence Critical Thinking Skills and Spring '97 GPA

The effect of students' personality traits on students' cognitive outcomes was statistically significant even after controlling for the pre-college characteristics. As shown in Table 3 and Figure 1, the effect of NEO-Openness-to-Experiences on WGCTA was significant. This means that when the effects of other predictors in the model were controlled, a one standard deviation increase in the NEO-Openness-to-Experiences scale was related to a 0.138 standard deviation increase in WGCTA. In addition, NEO-Agreeableness had a positive effect on WGCTA, and NEO-Conscientiousness also had a positive effect on spring '97 GPA. However, NEO-Extroversion had a negative effect on spring '97 GPA.
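Table 3 below reports direct, indirect, and total standardized effects. In a path model of this kind, an indirect effect is the sum, over the mediating activity variables, of the products of the path coefficients along each route, and the total effect is the direct effect plus the indirect effect. A minimal consistency check, using the standardized coefficients this paper reports later in Tables 4 and 5 for NEO-Conscientiousness and spring '97 GPA (only the estimated paths enter the sum):

```python
# Paths taken from Tables 4 and 5 of this paper:
#   Conscientiousness -> Academic activities = 0.328;   Academic -> spring '97 GPA = 0.205
#   Conscientiousness -> Club/Union activities = 0.115; Club/Union -> spring '97 GPA = -0.149
direct = 0.152
indirect = 0.328 * 0.205 + 0.115 * (-0.149)   # ~0.050, matching the Indirect entry in Table 3
total = direct + indirect                      # ~0.202, matching the Total entry in Table 3
print(round(indirect, 3), round(total, 3))
```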
Table 3. Standardized Effect of Personality on Grades and Critical Thinking Skills

Outcome       Effect     Neuroticism  Extroversion  Openness  Agreeableness  Conscientiousness
Spr '97 GPA   Direct        _____       -0.111*       _____       _____          0.152**
              Indirect     -0.020       -0.045        0.046       0.000          0.050
              Total        -0.020       -0.156        0.046       0.000          0.202
WGCTA         Direct        _____        _____        0.138**     0.124*         _____
              Indirect      0.000       -0.043       -0.008       0.000         -0.020
              Total         0.000       -0.043        0.129       0.124         -0.020

**: p<.01, *: p<.05; _____ indicates the parameter was not obtained.

Influence of Pre-college Characteristics

In addition to personality traits, pre-college characteristics were found to have a significant effect on academic and cognitive outcomes. The effect of SAT score on WGCTA was 0.632 (p<0.001), which means that when holding all other variables in the model constant, a one standard deviation increase in total SAT score was related to a 0.63 standard deviation increase in WGCTA. (The standardized path coefficient of sex on WGCTA was not significant.) The effect of total SAT score on spring '97 GPA was 0.295 (p<0.01) and the effect of high school GPA on spring '97 GPA was 0.343 (p<0.01). There was also an interaction effect of high school GPA and SAT on the spring '97 GPA, which indicates that the effect of SAT scores on the spring '97 GPA depends on students' high school GPA.

Additionally, the model with only three pre-college characteristics (i.e., SAT scores, high school GPA, and sex) explained 40 percent of the variance in WGCTA, and eight percent of the variance [8] in the spring '97 GPA. A simple regression analysis using personality characteristics and activities yielded a model that explained 15 percent of the variance of the spring '97 GPA and 17.5 percent of the variance of the WGCTA. [9] Thus, incorporating personality characteristics and co-curricular activities into the model was appropriate.

[8] When a simple regression was run using three pre-college characteristics (i.e., sex, high school GPA, and SAT), they explained about 29 percent of the variance in spring '97 GPA.
[9] Using only personality traits as independent variables, the model explained 10 percent of the variance in spring '97 GPA and 13.5 percent of the variance in WGCTA, respectively. The three co-curricular activity variables explained 9 percent of the variance in spring '97 GPA and 5 percent of the variance in WGCTA, respectively.

Personality Traits Influence Students' Engagement in Co-Curricular Activities

In addition to influencing critical thinking and grades, Table 4 shows that personality traits influence students' engagement in co-curricular activities. Students with high scores on the NEO-Neuroticism scale were less likely to be engaged in academically-oriented activities. Students who scored high on the NEO-Extroversion scale were more likely to be engaged in club and student union-related activities and to engage in social/interpersonal communication (conversation)-related activities. Students who scored high on the NEO-Openness-to-Experience scale were more likely to spend time in academic activities, but they were also more likely to spend time engaging in social/interpersonal communication (conversation)-related activities. Students who scored high on the NEO-Conscientiousness scale were more likely to engage in academic/learning-related activities and also were more likely to engage in student union/club activities.
Table 4. Standardized Effects of Personality on Co-Curricular Activities

                Neuroticism  Extroversion  Openness  Agreeableness  Conscientiousness
Academic           -.099*        ____        .360**      ____           .328**
Conversation        ____         .164**      .394**      ____           ____
Club/Union          ____         .224**      ____        ____           .115**

**: p<.01, *: p<.05; ____ indicates the parameter was not obtained.

Results of the path model also showed a sex difference in conversation-related activities (β = -0.144, p<0.01). Females were more likely to participate in conversation/personal acquaintance-related activities. In addition, students with higher GPAs were more likely to engage in academically-oriented activities (β = 0.159, p<0.01), but students with high SAT scores were less likely to engage in academically-oriented activities (β = -0.185, p<0.01).

Direct Effect of Co-curricular Activities on Cognitive Outcomes

As shown in Table 5, there was also a significant relationship between involvement in college activities and cognitive outcomes. Students' engagement in club/union/campus residence activities was associated with lower scores on both WGCTA and spring '97 GPA. The effect of conversation-related activities on WGCTA and spring '97 GPA did not show any statistical significance. However, academically-oriented activities showed a positive effect on spring '97 GPA. These results indicate that participation in student union/club activities has a negative effect on both WGCTA score and GPA.

Table 5. Standardized Effect of Activities on Cognitive Outcomes

Outcome           Academic   Conversation   Club/Union
Spring '97 GPA     .205**       -.072         -.149**
WGCTA              ____         -.021         -.176**

**: p<.01, *: p<.05; ____ indicates the parameter was not obtained.

Discussion

This study explores the relationship among students' pre-college characteristics, personality traits, co-curricular activities, academic achievement, and critical thinking score. Results indicate that personality does influence students' achievement and critical thinking. NEO-Agreeableness and NEO-Openness-to-Experience were positively and significantly related to WGCTA. Results suggest that students who are more extroverted will earn lower grades than peers who are less extroverted, and those who are more conscientious will earn higher grades than peers who are less conscientious. Unlike previous studies involving college students (Child, 1969), neuroticism does not seem to be a driving force for attaining a high GPA. Additionally, since extroverted students were more likely to spend time on clubs and student union-related activities, it is possible that students who obtained high scores on the extroversion scale may devote less time to study, which might lead to lower GPAs.

After controlling for academic aptitude and personality traits, only academically-related co-curricular activities were positively and significantly related to GPA. This finding underscores the importance of students' involvement in academic activities because academically-oriented activities contribute to higher GPAs. Note also that the effect of club/student union/campus residence-related activities on WGCTA was negative, a finding similar to that of previous studies (Pascarella et al., 1996). In addition, different types of activities influence the two academic outcomes differently, which may indicate that the two college outcomes are measuring different things.

Overall, the effects of pre-college characteristics such as SAT scores and high school GPA were larger than those of any other predictors of college outcomes. This result confirms Ting and Robinson's findings (1998).
The effects of co-curricular activities such as academic and club/union/residence hall-related activities were larger than the direct effects of personality traits on WGCTA and GPA, with the exception of NEO-Conscientiousness on spring '97 GPA. This finding indicates that, for this sample, the direct effects of personality traits were relatively minor. However, personality traits influence college outcomes via students' engagement in co-curricular activities, because the effects of personality on students' engagement in co-curricular activities were moderately high.

Implications for Faculty and Administrators

Results from this study broaden previous findings on the relationship between personality type and college outcomes (Entwistle & Entwistle, 1970; Entwistle, 1972; Digman & Takemoto-Chock, 1981). In addition to pre-college characteristics reflected in SAT, high school GPA, and co-curricular activities, this study indicates that personality traits have a relatively small (but significant) effect on the college outcomes of GPA and a measure of critical thinking. However, personality has a relatively larger impact on students' engagement in co-curricular activities, which in turn influence academic outcomes directly. Thus, knowledge of personality traits may enable faculty and staff to facilitate students' learning in an effective way. For example, students who score high in NEO-Extroversion or NEO-Openness-to-Experience are likely to explore such programs as undergraduate research, study-abroad, or major-related internships. More so than other students, student leaders, for example, may achieve cognitive and affective benefits due, in part, to their level of extroversion or conscientiousness.

Students' knowledge of their own personality traits can help them make wise choices about college activities. Freshman year curricular and co-curricular choices can act as a scaffold to further students' breadth of experiences and consequent increases in critical thinking skills. With knowledge of students' personality scores in hand, faculty and advisors can suggest activities that achieve a good fit or, conversely, identify activities that may not match a student's personality. For example, a student who scores highly on extroversion may thrive in public speaking activities whereas another who scores low on this trait will not. Similarly, a student who scores low on Openness-to-Experience is not likely to enjoy or benefit from study abroad or organizing a new student club. Accessibility to these activities would likely influence students' engagement in co-curricular activities and further cognitive outcomes. Since measurable cognitive gains increase gradually over a number of years (Ackerman, 1999), it is also important that college officials help students understand that a variety of activities nurture cognitive growth, and thus encourage students to become or remain active in volunteer community service, research with faculty mentors, and/or major-related internships throughout their baccalaureate experience.

Limitations

Limitations of this study are related to external validity and the length of the study. Because of the self-selected nature of participation, the sample was not randomly selected, thus limiting generalizability. [10] Due to the self-report nature of the data, responses on the survey may not accurately convey students' efforts in all activities.
Since some of the activities are socially more desirable than others, it is possible that students might choose those activities based on social acceptability rather than true interest. Finally, this study examines the relationship between personality traits, students' engagement in activities, and college outcomes during the first year of baccalaureate studies. Thus, it is not known whether students' engagement in college activities is a continuation of their high school activities, nor whether the same co-curricular activities affect college outcomes throughout the baccalaureate experience. Additionally, it is also not known how and whether personality traits change over time and affect students' engagement in activities differently. Thus, it would be more meaningful if similar research questions were investigated in a longitudinal fashion that employs growth modeling.

[10] Based on a one-sample t-test using SAT scores, all students except those in the animal science, civil engineering, and psychology departments were representative samples of their departments.

References

Ackerman, P. L. (1998). Traits and knowledge as determinants of learning and individual differences: Putting it all together. In P. L. Ackerman, P. C. Kyllonen, & R. D. Roberts (Eds.), Learning and individual differences (pp. 437-460). Washington, D.C.: American Psychological Association.

Arbuckle, J. L. (1998). Amos 4. Chicago: Small Waters Corp.

Astin, A. W. (1977). Four critical years. San Francisco: Jossey-Bass.

Astin, A. W. (1993). What matters in college: Four critical years revisited. San Francisco: Jossey-Bass.

Baxter-Magolda, M. B. (1992). Cocurricular influences on college students' intellectual development. Journal of College Student Development, 33(3).

Biggs, D. A., Roth, J. D., & Strong, S. R. (1970). Self-made academic predictions and academic performance. Measurement and Evaluation in Guidance, 3(1), 81-85.

Child, D. (1969). A comparative study of personality, intelligence and social class in a technological university. British Journal of Educational Psychology, 39(1), 40-47.

Costa, P., Jr., & McCrae, R. R. (1991). The NEO Five Factor Inventory. Odessa, FL: Psychological Assessment Resources, Inc.

Digman, J. M., & Takemoto-Chock, N. K. (1981). Factors in the natural language of personality: Re-analysis, comparison and interpretation of six major studies. Multivariate Behavioral Research, 16(2), 149-170.

Entwistle, N. J., & Entwistle, D. (1970). The relationships between personality, study methods, and performance. British Journal of Educational Psychology, 40(2), 132-143.

Entwistle, N. J. (1972). Personality and academic attainment. British Journal of Educational Psychology, 42(2), 137-151.

Facione, P. A., Sanchez, C. A., Facione, N. C., & Gainen, J. (1995). The disposition toward critical thinking. The Journal of General Education, 44(1), 1-25.

Hooker, K., Frazier, L. D., & Manahan, D. J. (1994). Personality and coping among caregivers of spouses with dementia. Gerontologist, 34(3), 386-392.

Jacobs, S. S. (1995). Technical characteristics and some correlates of the California Critical Thinking Skills Test, Forms A and B. Research in Higher Education, 36(1), 89-108.

Kuh, G. D., Schuh, J. H., Whitt, E. J., Andreas, R. E., Lyons, J. W., Strange, C. C., Krehbiel, L. E., & MacKay, K. A. (1991). Involving colleges: Encouraging student learning and personal development through out-of-class experiences. San Francisco: Jossey-Bass.
Kuh, G. D. (1995). The other curriculum: Out-of-class experiences associated with student learning and personal development. Journal of Higher Education, 66, 123-155.

Mouw, J. T., & Khanna, R. K. (1993). Prediction of academic success: A review of the literature and some recommendations. College Student Journal, 27(4), 328-336.

Pace, C. R. (1984). The College Student Experiences Questionnaire (3rd ed.). Bloomington, IN: Center for Postsecondary Education, Indiana University.

Pascarella, E. T. (1989). The development of critical thinking: Does college make a difference? Journal of College Student Development, 30(1), 19-26.

Pascarella, E. T., Whitt, E. J., Edison, M., Nora, A., Hagedorn, L. S., Yeager, P. M., & Terenzini, P. T. (1996). What have we learned from the first year of the national study of student learning? Journal of College Student Development, 37(2), 182-192.

Pascarella, E. T., Whitt, E. J., Edison, M., Nora, A., Hagedorn, L. S., Yeager, P. M., & Terenzini, P. T. (1997). Women's perceptions of a "chilly climate" and their cognitive outcomes during the first year in college. Journal of College Student Development, 38(2), 109-124.

Terenzini, P. T., Springer, L., Pascarella, E. T., & Nora, A. (1995). Influences affecting the development of students' critical thinking skills. Research in Higher Education, 36(1), 23-39.

Ting, S. R., & Robinson, T. L. (1998). First-year academic success: A prediction combining cognitive and psychosocial variables for Caucasian and African American students. Journal of College Student Development, 39(6), 599-610.

Tross, S. A., Harper, J. P., Osher, L. W., & Kneidinger, L. M. (2000). Not just the usual cast of characteristics: Using personality to predict college performance and retention. Journal of College Student Development, 41(3), 323-334.

Watson, G. B., & Glaser, E. M. (1994). The Watson-Glaser Critical Thinking Appraisal, Form S. San Antonio, TX: Psychological Corporation.

THREADING THE DEVELOPMENTAL MAZE: REMEDIAL PROGRAM COMPLEXITY AND STUDENT PROGRESS AT A LARGE, SUBURBAN COMMUNITY COLLEGE

Karl Boughan
Coordinator of Institutional Research
Prince George's Community College

Introduction

The degree-conferring rates of four-year colleges and universities typically and substantially outstrip those of two-year postsecondary institutions: at the national level, for example, by 65 to 23 percent respectively (Adelman, 2000). In this study, we posit a key role for remedial education in the formation of this "graduation gap." Specifically, we reason that high developmental program participation rates plus low program completion rates tend to produce inflated attrition rates, especially at schools where remediation program completion is a prerequisite for enrolling in most entry-level credit courses. That attrition may be the overt sort (early college exiting), but here we mostly had in mind sizable numbers of what may be called "stealth dropouts": continuing students who are remedial non-completers and therefore effectively precluded from degree-track course-taking.
The national data fit the pattern in a general way: over two-fifths (41 percent) of all first-time freshmen entering public two-year schools in 1995 were enrolled in courses designed to remediate college skills deficits (National Center for Educational Statistics, 1996); only 43 percent of such developmental students completed all their program requirements; and, at mid-decade, more than half of the country's community colleges mandated incoming student developmental education placement testing and had established enrollment procedures essentially limiting serious credit course-taking to those who had finished remediation or required none (McCabe, 2000).

Hardly any research, however, has been specifically devoted to exploring the interplay between remedial education's skills-credentializing function and academic outcomes from a process perspective. In fact, little research attention of any kind has been paid to working out the details of developmental education as a process. Instead, most developmental research has tended to concentrate on practical institutional case studies concerning the salutary impact of specific program reforms (see Ignash, 1997; Boylan, 2000), although one does run across the occasional report on correlations between student degree progress and developmental program participation conceived mostly as an undifferentiated phenomenon (for example, Brophy, 1984; Keller and Williams-Randall, 1998; Yang, 2000; Zhao, 1999).

This study, in a small way, seeks to advance the understanding of developmental education as a process, a goal-organized dynamic of academic policies and instructional operations capable of exerting an influence on academic outcomes comparable to the impacts of factors such as scholastic ability and academic and social environments. This we hoped to accomplish by demonstrating how the complex nature of the developmental program at one fairly representative community college systematically interacted with remedial student decisions and behaviors to limit access to degree programs.

Institutional Setting and Developmental Program Characteristics

Prince George's Community College is a public, two-year postsecondary education provider in the Maryland suburbs of the District of Columbia, with a fiscal year credit enrollment averaging around 15,000 students. Its institutional performance in terms of state standard assessment indicators falls within the normal range for its peer group, and it is also unexceptional for a school of its type in the socio-economic composition of its student body, except for a very high concentration of African American attenders (70 percent). It is fairly representative, as well, of state community colleges in the size and program area distribution of its remedial student enrollment, and in the form and functioning of its developmental education process (Maryland Higher Education Commission, 1996).

All incoming credit course students are expected to undergo the full battery of remediation placement tests (DTLS/MS for three basic skills areas: English composition, reading comprehension and high school-level mathematics), or to seek and obtain formal exemptions based on prior college work (transfer students), scores received on national education tests (SAT or ACT), or past fulfillment of special preparatory programs (for example, a pre-registration intensive algebra review course).
Students evading one or all remedial assessments are not formally prohibited from attempting credit enrollment, but will find this quite difficult in practice, lacking the proof of basic skills proficiency that is a prerequisite for taking most entry-level credit courses. Area program courses fall into low tier-high tier sequences (with an intermediate tier in math), based on the number and type of skills deficiencies identified during the placement testing. Remedial students are placed into the appropriate tier given their test scores, and if placed into a lower tier must work their way up, with retrograde motion also a possibility under some conditions. Students with single area requirements who place into the top tier have a total developmental "course burden" of 1; those requiring the most intensive level of remediation in all three areas start out with a minimum course burden of 7. Developmental courses may be repeated only once, with a non-advancing grade the second time around constituting formal program failure (although a peremptory course withdrawal is possible up to semester midpoint). For the most deficient, the course burden may reach 14.

Only institutional CEUs are awarded for passed developmental courses, but there exists no bar to students taking credit courses simultaneously with remedial ones, provided they meet course prerequisites, including those relating to basic skills proficiency. This means that students with yet-unremediated deficiencies in one area but not another are perfectly free to take credit courses which lack skills prerequisites of the first kind. Although students are recommended to finish their remedial studies early, no remediation schedule is mandated and they are free to enroll in developmental courses at any time or in any area order they choose. In fact, as in the case of test avoidance, failure to begin area programs on a timely basis or at all, or for that matter to complete any begun, does not preclude credit enrollment, subject to the usual course prerequisite caveats. The degree track is never denied to incompletely remediated students by formal prohibition. What effectively bars them is the way the system of credit course prerequisites works at registration. All entry courses to the general education program that students must complete before graduating, and most degree program entry-level courses, as already noted, require proof of proficiency.

Methodological Considerations

It was important to go into such detail concerning developmental program procedure at PGCC because that is where the Devil is and where we had to start from in designing a study which, after all, puts procedural complexity at the center of research. In preparation for our work, a massive developmental program file was assembled based on 1995-2000 student transcript data, covering all the aspects of remediation procedure at PGCC just reviewed, plus student developmental program placements, decisions on program options, course behaviors, program outcomes and overall academic outcomes. The methodological approach adopted was longitudinal analysis, in this case of the cohort of all 1996 fall-entering first-time credit students (N=2,094). Cohort Fall-96 was the first for which developmental data were 100 percent complete and verified, and was also the first to feel the effects of the College's new computer-driven course lock-out system designed to eliminate course prerequisite violations at registration.
Using this cohort would also allow a sufficient time span (four years) for developmental and overall academic outcomes to become manifest.

The next step was the choice of a developmental status measurement method appropriate to our research aims. The candidates were the conventional method, used in most institutional reporting, which sorts students solely on the basis of actual developmental placement testing and course-taking results, versus a new approach we call the degree-track method, keyed to the ability of students to meet the skills proficiency prerequisites of degree-relevant credit courses. Basically, the former is more data-audit defensible but, as Table 1 below shows, tends to distort the meaning and inflate the size of the "not required" and "required/completed" categories by including within them students who skipped one or two placement tests and therefore lack credentials in some skill areas.

Table 1. Two Remedial Status Measurement Methods (Cohort Row Percentages)

Method          NOT REQ     REQ       REQ/CMP    REQ/INC    NO DATA
Conventional    34.6 (a)    58.5 (b)  14.4 (c)   44.1 (d)   6.8 (e)
Degree-Track    25.0 (f)    75.0 (g)  13.1 (c)   61.9 (h)

a. No REQ, any testing; b. Any tested REQ; c. All REQ/CMP; d. Any REQ/INC; e. Untested; f. No REQ, all 3 tests; g. Any REQ, all 3 tests, or test miss; h. Any REQ/INC or any test miss.

The latter, however, defines developmental non-completion as the inability to meet credit course prerequisites for basic skills proficiency, either because a student has failed to pass a required remediation program, or because one or more of his skills remains unassessed (missed placement tests). Accordingly, only students tested in all three skills areas can sort into the non-developmental or completed remediation categories, which is equivalent to placing them in a more general degree-track category. Since the focus of our study was the linkage between the developmental process and access to the degree track, the second remedial status measurement method was obviously superior for our purposes and was used in all subsequent analysis.

Table 2. 1996 Cohort 4-Year Developmental and Academic Outcomes (Percents)

Selected Outcome        Cohort   Non-Dev.   DT CMP   Conv CMP   DT INC   Conv INC   No Tests
Transfer Only             8.2      17.0       9.1      14.3       4.4       4.9       2.1
Both Deg./Transfer         .9       3.4        .4        .0        .0        .0        .0
Degree Only               4.6      10.1      15.7        .0        .0        .0        .0
Pre-Grad (45-59 Hrs)      9.0      12.0      16.1      39.3       5.6       7.9        .7
Soph (30-44 Hours)        9.3      11.9      17.1       7.1      13.7      19.1       4.2
Frosh (1-29 Hours)       48.5      38.6      37.6      39.3      53.5      52.3      62.2
No Credits Earned        19.5       6.9       4.0        .0      22.8      15.8      30.8
Column N                2,094       523       274        28       822       304       143
Row %                   100.0      25.0      13.1       1.3      39.3      14.5       6.8

DT = by degree-track method (all tests taken); Conv = conventional (hidden test skipping)

Finally, before proceeding to the analytic phase of our study, we thought it prudent to make a reality check on the effectiveness of the college's enrollment prerequisite structure in blocking skills-unproven students from the degree track. Table 2 (above) presents the results of that trial: a cross-tabulation of a variable dividing developmental status categories into degree-track and non-degree-track (conventionally defined) sub-types, with selected four-year academic outcomes, including, most crucially, degree attainment. If a tight linkage exists between unproven skills proficiency and minimization of credit enrollments, then the percentages for the table cells (deep shaded) representing degree attainment levels among off-track developmental status students should be zero or near to it, which proves to be the case.
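For reference, the degree-track sorting rule used in all subsequent analysis reduces to a simple decision function. This is a hypothetical sketch in Python; the boolean flags are illustrative stand-ins, not fields from the actual transcript file.

```python
def degree_track_status(tested_all_three: bool, any_area_required: bool,
                        all_required_completed: bool) -> str:
    """Degree-track method: only students assessed in all three skill areas can sort
    into the non-developmental or completed-remediation (degree-track) categories;
    anyone with a missed test or an unfinished required program counts as incomplete."""
    if not tested_all_three:
        return "required/incomplete"      # missed placement test(s): proficiency unproven
    if not any_area_required:
        return "not required"
    return "required/completed" if all_required_completed else "required/incomplete"

# e.g., a student who skipped the math placement test but finished DVE and DVR:
print(degree_track_status(tested_all_three=False, any_area_required=True,
                          all_required_completed=False))   # -> required/incomplete
```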
Even test-skipping "completers," the most likely to break the pattern, collectively failed to earn a single associate degree or occupational certificate after four years of trying, while 16 percent of the true completers (all skills assessed/all program requirements fulfilled) managed to graduate. Interestingly, however, off-track completers accomplished more transfers, the single standard form of academic success procedurally open to them, than any other group. They also tended, appropriately, to have the largest numbers backed up in the near-graduation slot (39 percent). The only "degreeless" degree-track category (light shaded award results) properly turned out to be the all-test incompletes.

Developmental Program Area Findings

Our analytic work began with an exploration of what happened to cohort developmental students in each of the three skills remediation programs. The full range of the exhibited decisions, behaviors and outcomes of the sub-cohort within each remedial area was examined for patterns, leading to the development of three master "area career path" variables. Each variable category represented a discrete career path through (or around) the remedial area program. Appropriate to the complex possibilities latent in the dynamics of postsecondary remedial education, these variables expressed a very large number of realized career paths: 28 paths in the cases of developmental English (DVE) and reading (DVR), and 47 in the case of developmental math (DVM). Table 3 below presents three manageably condensed versions of these variables along with their all-cohort (ALL), area-required student (REQ) and area course-taking student (CRS) percentage distributions.

Table 3. Comparative Developmental Program Main Effects (Percentages)

                           DVE                      DVR                      DVM
                    ALL     REQ    CRS       ALL     REQ    CRS       ALL     REQ    CRS
N                 2,094     824    419     2,094     968    475     2,094   1,371    642
Not Required       53.8                     60.7                     34.5
Required           46.2   100.0             39.4   100.0             65.5   100.0
Completed          16.0    34.5   68.0      12.5    31.7   59.2      11.7    17.9   34.9
- Course Pass      15.1    32.7   66.7      11.5    29.2   57.5      10.2    15.5   32.2
- Re-Test Out        .6     1.3     .2        .8     2.1    1.0       1.2     1.8     .5
- Late Start         .2      .5    1.1        .1      .4     .7        .4      .6    1.2
Incompletes        30.2    65.5   32.0      26.9    68.3   40.8      53.7    82.1   65.1
- Unassessed       15.3    33.1             13.7    34.8             16.0    24.2
- No Courses        7.7    16.6              5.0    12.7             17.8    27.2
- NP Grade1*        2.2     4.7    9.9       2.4     5.8   11.9       7.1     9.9   23.1
- NP Grade2*         .2      .6     .8        .2      .9    1.2       1.8     3.7    5.9
- Dropout/W         3.8     8.2   16.6       2.9     7.4   14.6       7.8    11.9   25.4
- Dropout/P          .9     1.9    3.8       2.5     6.4   12.6       2.7    4.26    8.9
- Late Start         .2      .4     .8        .1      .2     .5        .6      .9    1.9

*NP Grade1 = Non-passing/1st Attempt; NP Grade2 = Non-passing/2nd Attempt
For example, only 14 percent of the students needing skills credentials in math (DVM/REQ) failed to complete remediation due to non-passing DVM course grades in their last course (first and second attempt combined). In fact, absence of DVM course enrollments (program avoidance) was twice as big a problem (27 percent) and the single most important reason what students tended to lack math skills credentials, followed by failure to take the DVE placement exam in the first place (assessment avoidance C 24 percent). Among DVM course-takers, poor grades did account for largest share of non-completers (29 percent), but final course withdrawal (program dropping) was almost as telling a factor (25 percent), and even stopping program after receiving an advancing grade was a discernable trend (9 percent). For that matter, most course grade-related incompletes could be considered decisional rather than behavioral. For the definitional purest, only formal program failure (non-passing grade in the final course repeat attempt) would constitute genuine flunking of the program (just 6 percent of DVE course-takers); the others with non-passing grades but not making a second attempt to pass (23 percent) could be said to have opted to stop their programs before completion. Overall Remedial Effects The next step in the research plan called for investigating general remedial process effects on student progress, a tricky proposition since this involves somehow summarizing the degree track effects found in Table 3 across all three remedial areas. We took two related tacks to achieve this, the results of both shown in Table 4 below. In the first, we reconstituted the three developmental area career path variable of Table 3 into a set of discrete dummy variables representing any instance in a student=s career across all three developmental areas of a particular (e.g., any incidence of program evasion). Such indicators collectively examined provide a useful level of insight into cross-area relative importance of types of student decisions and behaviors for spoiling access to degreeculminating credit courses, overlapping case membership (a single student may exhibit up to three different any-instance development paths) blurs interpretation. The second and more potent approach was to trick the many any-instance indicators into a single multi-category measure we call the Preclusion Cascade. The trick was accomplished by the employment of A trumping@ rules. The Cascade assigns a degree-track precluded student to one and only one preclusion incident category, according to chronological or logical precedence. To illustrate, an occurrence of area assessment avoidance in a 20 student=s overall development career A trumps@ any manifestation of program evasion or termination in other areas since it comes at the very start of that career. Similarly, a student who may have both withdrawn from a final course in one area and formally failed his final course in another would be assigned to the latter, unavoidable category. This procedure not only eliminates messy case overlaps but produces a variable of the discrete preclusion incidence categories which arrange themselves naturally by intervention priorities. Table 4's Preclusion Cascade percentage distributions tend strongly to underline our main individual area finding C that incomplete basic skills remediation is primarily a function of non-scholastic factors. 
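A minimal sketch of the trumping logic, in Python; the per-student flags are hypothetical stand-ins for indicators derived from the transcript file, and the priority order follows the chronological/logical precedence described above.

```python
# Each precluded student is assigned to the first category in this priority list that applies.
CASCADE = [
    ("Total Assessment Avoidance",      lambda s: s["skipped_all_tests"]),
    ("1 or 2 Skipped Tests",            lambda s: s["skipped_some_tests"]),
    ("Program Evasion (No Courses)",    lambda s: s["required_area_without_courses"]),
    ("Formal Failure (2nd Attempt NP)", lambda s: s["nonpassing_grade_on_second_attempt"]),
    ("1st Non-Pass Grade/Drop",         lambda s: s["nonpassing_first_attempt_not_repeated"]),
    ("Course Withdrawal/Drop",          lambda s: s["withdrew_from_last_course"]),
    ("Advancing Grade/Stop",            lambda s: s["stopped_after_advancing_grade"]),
]

def preclusion_category(student: dict) -> str:
    """Return the single highest-precedence preclusion incident for a non-completer."""
    for label, applies in CASCADE:
        if applies(student):
            return label
    return "no preclusion incident (completer or non-developmental)"
```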
The pattern in Table 4 is very clear: 23 percent of the cohort (37 percent of developmentally incomplete students) left the degree track at the placement testing point of the remedial process by avoiding a skills assessment, and another 16 percent of the cohort (26 percent of the non-completers) departed at the program start point by failing to enroll in required area courses.

Table 4. Cross-Developmental Program Main Effects (Percentages)

                                      Any Instance                  The Preclusion
                                 (Overlapping Categories)              "Cascade"
                                 % of Cohort  % of Remedial    % of Cohort  % of Incomplete
                                   (2,094)       (1,571)         (2,094)       (1,297)*
Non-Developmental                    25.0
Developmental Required               75.0         100.0
  Completed                          13.1          17.4
  All Non-Completers                 61.9          82.6
Total Assessment Avoidance            6.8          11.0              6.8          11.0
1 or 2 Skipped Tests                 15.9          25.6             15.9          25.6
Program Evasion (No Courses)         19.7          31.8             16.5          26.7
Formal Failure (2nd Attempt NP)       1.4           2.3              1.3           2.2
1st Non-Pass Grade/Drop              12.5          20.2              7.7          12.5
Course Withdrawal/Drop               11.0          17.8              7.1          11.4
Advancing Grade/Stop                  5.5           8.9              2.5           4.1
* "Incomplete" sub-column percentages sum to 100.

Thus, almost two-fifths of cohort members (over three-fifths of non-completers) lost all chance of graduating in a way that had nothing to do with what happens in developmental classrooms. The remainder of the non-completers had undergone all three skills assessments and entered all of their required programs, but many spoiled their graduation chances by failing to repeat a course in which they earned a non-passing grade (cohort 7 percent, non-completers 12 percent), and similar proportions effectively withdrew from the degree track by withdrawing from a last developmental course. Given all of the above, only 1 percent of the cohort (2 percent of non-completers) went off-track exclusively because of failed scholastic effort.

Finally, at some point it occurred to us that much of this seemingly rampant developmental avoidance and withdrawal behavior might be spurious, an artifact of simple first term college attrition. So we re-constructed the Preclusion Cascade to include a Term 1 dropout effect. The result was that first term attrition jumped straight to the top of the list of developmental non-completion explanators (cohort 20 percent, non-completers 22 percent). However, although developmental avoidance and withdrawal effects were attenuated, they remained robust. For example, the proportion of students failing to complete due to program evasion dropped from 20 percent of the cohort to a still strong 10 percent. Thus, we were able to conclude that, yes, first term attrition was an important source of remediation non-completion in our cohort, but, no, it fell far short of explaining away the sort of developmental evasion and retreat we were discovering.

Student Developmental Career Clusters

In the final phase of our research, we decided to drop the narrow focus on degree track preclusion. We wanted to search more freely for cohort developmental career patterns, using a broader set of remedial behavior variables and a methodology capable of bringing coherence to a wider range of cross-area remediation effects. Furthermore, this time we wished to gain insight into what makes for successful as well as unsuccessful careers.
Our research plan called for a k-means cluster analysis of cohort students requiring remediation in at least one developmental area (n=1,094), using data representing not only their cross-area career paths (dummy variables derived from the three master developmental career variables), but also remediation need (e.g., number of required programs, level of program placement), program effort (e.g., number of courses taken per courses required, incidence of course repeats, major term duration of course-taking), and first term history (e.g., extent of credit course-taking and credit earning, first term attrition). The cluster analysis was stratified by degree of remedial success (all required programs completed, some completed, and none completed) for maximum clarity in the discernment of developmental career patterns tending toward positive or negative outcomes. Different analysis solutions within each outcome stratum (between two and five clusters) were generated and examined for levels of homogeneity and interpretability. The solutions finally accepted yielded a cross-strata set of nine clusters, which in Table 5 are named and briefly described by the key statistics used in their derivation. (Table 5, referenced in this article, may be obtained by contacting the author.)

Space prohibits an elaboration of the nature of each individual cluster, but the following general observations bearing on developmental careers and remediation success can be made:

Full Completion Clusters. Two clusters emerged in this stratum, one composed mostly of students needing only, and easily completing, brush-up in single skill areas, and another composed of students with multi-skills deficiencies who had only moderate trouble with their DVE and DVR programs but had to fight their way to victory in DVM. Although the Brush-Up cluster was twice as populous as the Math+ Champion group, it was somewhat mollifying to discover that program success at PGCC was not completely confined to those developmental students least needing remediation. Furthermore, an analysis of the academic outcomes for these two clusters revealed similar above-average levels of degree and transfer attainment, suggesting that the oft-observed phenomenon of the remediated super-student includes not only the marginally skills-deficient but the hard cases as well.

Partial Completion Clusters. These three are the "heartbreak" clusters, so near to and yet so far from entering the degree track. Both the Multiple Area Strugglers and the All-But-Math Fighters made major efforts to overcome cross-area skills deficits but ultimately fell short, the latter held back from victory only by mathematical inaptitude. The third and majority cluster in this stratum was also balked mainly in math remediation, but here a sort of failure of nerve, rather than defeat in battle, seemed to be involved: the Math Dodgers manifested high levels of math program evasion, and DVM programs that were started usually terminated in course withdrawal.

No Completion Clusters. In this largest of the strata (including over half of all program non-completers), four clusters emerged. Two small groups (the Math Defeated and the 3R Lost Causers) battled heroically but to zero effect. Energetic effort by the former could not overcome a single area deficiency in math, and the latter, the most cross-area deficient of any cluster, suffered a general rout. But dominating the stratum were two large clusters whose defeat was mostly self-inflicted.
A large plurality (near 40 percent) of the multiply skills-deficient 3R Dropouts engaged in program evasion, and the remainder tended to terminate their programs after a single course. The final product of the stratum III cluster sort was the Math Dodger group, the most populous of any and embracing three out of ten of all developmental non-completers in the cohort. These turned out to be predominantly math-only remedial students. Even so, the prospect of undergoing remediation in just this one area proved too daunting for them, with 70 percent dodging all DVM course-taking.

Perhaps a quick review of the lessons taught by these cluster analysis patterns may also stand as a summary of the key findings of the entire study. If the way Prince George's Community College's remedial education process functions is any example, then:

• The complexity of the remedial process allows for the emergence of many developmental career types and multiple paths to both happy and unhappy conclusions. This suggests that, for maximum effectiveness in remediation, developmental programs should be particularized to fit the diverse needs, abilities and prospects of each type.

• Motivation matters, but is not decisive. For the multiply skills-deficient, program effort and dedication are a necessary but not sufficient condition for a successful remedial career. Several developmental career types struggle heartily but futilely toward the completion of their remediation. These should be the prime targets of student support services generally and special intervention programs in particular.

• Poor developmental career decision-making accounts for more remediation non-completion than does poor developmental course performance. The maze-like complexity of the remediation process includes many dead-end corridors off the marked path, and the indirect nature of degree track preclusion by credit course skill prerequisites encourages the false impression among students that developmental evasion and withdrawal are viable academic options.

• The developmental maze proves very difficult for students to thread properly. Advisement services should be appropriately enlarged and energized to mitigate the reluctance, confusion, frustration and panicked search for short-cuts that developmental education, like a physical labyrinth, inherently fosters.

• Last but far from least, the specific character of the remediation process at a postsecondary institution may make its own, independent mark on that school's academic outcomes. In PGCC's case, the remediation process effectively shaved the actually graduateable student body to about a third of the number of degree-seekers appearing in its institutional reporting. That is power indeed.

References

Adelman, C. (2000). Are We Still the Way We Were?: Describing Paths of Community College Students. Paper presented at the Annual Forum of the Association for Institutional Research, Cincinnati, OH, May 2000.

Boylan, H., & Saxon, D. (2000b). What Works in Remediation: Lessons from 30 Years of Research. Unpublished paper prepared for the League for Innovation in the Community College, Mission Viejo, CA.

Brophy, D. (1984). Relationship between Student Participation in Student Developmental Activities and Rate of Retention in a Rural Community College. Report of Administrative Services and Research, Sierra Joint College District, Rocklin, CA.

Ignash, J. (Ed.) (1997). Implementing Effective Policies for Remedial and Developmental Education.
New Directions for Community Colleges, no. 100 (Winter 1997). Keller, M., Williams-Randall, M. (1998). Relationship between Student Success in College and Assessment for Remedial Assistance. Paper presented at the Annual Forum of the North East Association for Institutional Research, Philadelphia, November 1998. McCabe, R. (2000). No One to Waste: A Report to Public Decision-Makers and Community College Leaders. Washington, DC: American Association of Community Colleges, Community College Press. Maryland Higher Education Commission (1996). A Study of Remedial Education at Maryland Public Campuses (1996). Annapolis, MD, May 1996. National Center for Education Statistics (1996). Remedial Education at Higher Educational Institutions in Fall 1995. NCES 97-584. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement. Yang, F. (2000). Using Survival Analysis to Analyze and Predict Students= Achievement from their Status of Developmental Study. Paper presented at the Annual Forum of the Association for Institutional Research, Cincinnati, OH, November 2000. Zhao, J. (1999) Factors Affecting Academic Outcomes of Underprepared Community College Students. Paper presented at the Annual Forum of the Association for Institutional Research, Seattle, WA, May-June 1999. 25 26 STUDENT SELF-PERCEIVED GAIN SCALES AS THE OUTCOME MEASURES OF COLLEGIATE EXPERIENCE David X. Cheng Assistant Dean for Research and Planning Columbia University The growth of the outcomes assessment movement in higher education has been dramatic in the past decade. In the public sector, colleges and universities have come under increasing pressure from their constituencies to demonstrate their accountability, effectiveness, and efficiency in measurable terms. As a result, many institutions, especially public college/university systems, have adopted some kind of performance indicator systems with simple and quantifiable measures (Borden & Banta, 1994; Cheng & Voorhees, 1996). In the private sector, though many institutions, especially the elite ones, still enjoy the favorable ratings by US News and World Report and other agencies using “reputational” and “resources” approaches (Jacobi, Astin, and Ayala, 1987), the general sense of crisis is deepening. The public, students, and their parents demand to know whether private, elite institutions are delivering what they promised, and whether they are doing so in a cost-effective, high-quality way (Upcraft & Schuh, 1996, p. 8). While these myriad pressures have prompted college administrators to scramble for assessment models that fit their own institutions, there is also mounting evidence showing that the fundamental question of outcomes assessment, i.e., What is to be assessed? is often overlooked. The list of performance indicators compiled by Bottrill and Borden (1994) from various sources reveals the general tendency of institutions in moving toward a system of indicators that are quantifiable, easy to capture, and usually having the appearance of objectivity. Student test scores on aptitude, GPA’s, retention/persistence/graduation rates, etc., are among the most popular indicators adopted. While all these indicators do indeed measure certain aspects of an institution’s effectiveness, the biggest drawback, however, lies in their inability to provide meaningful information on students’ intellectual and personal development as the outcomes of their collegiate experience. 
Consequently, institutions adopting performance indicators typically find it difficult to include any indicators that can reliably measure the less tangible aspects of students’ collegiate experience. Literature Review In their 1987 ASHE-ERIC Higher Education Report Jacobi, Astin, and Ayala (1987) proposed an alternative conception of “talent development” to counter the popular definitions of excellence using the reputational and resource approaches. Jacobi, Astin, and Ayala (1987) believe that “a high quality institution is one that maximizes the intellectual and personal development of its students” (p. iv). 27 This report was among a considerable number of studies carried out to explore different taxonomies of the outcomes of college. Other influential studies include: Astin, 1973; Brown & DeCoster, 1982; Chickering & Gamson, 1987; Ewell, 1984, 1985a, 1985b, 1988; Hanson, 1982; Kur, Pace & Vesper, 1997; Kur, Hu & Vesper, 2000; Lenning, Lee, Micek, & Service, 1977; and Pascarella and Terenzini (1991). The importance of the research in this area, according to Jacobi, Astin, and Ayala (1987), is to provide a useful “menu from which researchers and practitioners may select the items of greatest importance to measure and track” (p. 19). Of the frequently cited typologies, Astin’s (1974, 1977) provides a three-dimensional taxonomic system: by type of outcomes: cognitive vs. affective; by type of data: psychological vs. behavioral; and by time: short-term vs. long-term. To a large extent Astin’s taxonomy is more of a framework for outcomes than actual outcome categories, as they are the case in Lenning (1977, 1980) and Bowen (1980). Mentkowski & Doherty’s (1983) typology is more practically-oriented, developed by faculty and administrators at Alverno College to implement an outcome-centered liberal arts program. In the national scene, a number of attempts have been made in recent years to convert students’ behaviors, cognitions, and attitudes enhanced through collegiate experiences into outcome indicators (National Center for Education Statistics (NCES), 1991; National Education Goals Panel, 1992; National Center for Higher Education Management Systems (NCHEMS), 1994). It is of no surprise that researchers or research groups differ considerably among themselves in their developed categories or taxonomies of outcome measures. However, common to most of these attempts is that the assessment of student behaviors, cognitions, and attitudes has to rely heavily on subjective measures using student self-perceived intellectual, social, and personal gains. “For some outcomes, student reports may be the only source of useful data” (Kur, Pace & Vesper, 1997). The College Student Experience Questionnaire (CSEQ) (Pace, 1979) and the College Student Survey (Higher Education Research Institute, 1989) are among the most widely used survey instruments that include items of student self-reported gains in college. The results of research using student self-reports of growth are in general consistent with research using other measures of collegiate achievement (Anaya, 1999; Pace, 1985; Pike, 1995). Research Questions In the ideal world of assessment, an institution is supposed to go through a cycle from setting missions, goals, and objectives, to developing instruments to assess the effectiveness of institutional performance as related to the goals, and finally to making improvements using the assessment results (Moxley, 1999). 
However, in the real world, few institutions find themselves completing such a perfect cycle, due to all kinds of constraints. For instance, limited by time, expertise, and the lengthy testing cycle, an institution can hardly afford to locally develop a valid and reliable instrument that assesses exactly what the institutional goals or missions call for. Therefore, a common alternative is to adopt a commercial survey instrument or to join a research consortium and use a consortium-developed survey instrument. Once an institution adopts such an externally-developed survey instrument to assess student collegiate experience, it is not unusual for it to find itself caught in a dilemma: on the one hand, it has all these wonderful theories, taxonomies, or typologies that it wants to use to assess its students' collegiate experience; on the other hand, the survey instrument it adopts is either not specific enough to address certain unique institutional experiences, or it simply contains too many items, which not only blurs the focus of institutional assessment goals but also makes the results hard to interpret.

With all the existing outcome taxonomies as the research framework, the purpose of this study is therefore twofold: 1) to analyze an array of questions on student self-perceived gains in college using an externally-developed survey instrument, aiming at developing several comprehensive Student Self-Perceived Gain Scales (SSPGs) to support an institution's assessment of student collegiate experience, and 2) to test the utility of the developed SSPGs and their associations with various characteristics of the student body in a private, highly selective institutional environment.

Methods

The data used in this study are from a senior survey of the graduating classes of 1997, 1998, and 1999 at a private, urban, and highly selective research university. Because the institution requires that graduating seniors complete the survey before picking up their graduation tickets, the response rates were close to 100%. The total number of cases included in the 1997, 1998, and 1999 files is 1,057, 1,104, and 1,103 respectively. The respondents were graduates of two undergraduate colleges: the college of arts and sciences (A&S) and the college of engineering (ENGR). The survey instrument was designed by a consortium of highly selective institutions to assess different aspects of their students' experience in college, and the questions cover graduates' future plans, evaluation of undergraduate experience, financing of undergraduate education, college activities, and demographic background. There are twenty-four questions in the survey asking about students' self-perceived gains.

An exploratory factor analysis of the twenty-four items concerning student self-perceived gains was conducted using the 1997 survey data. Principal component analysis with varimax rotation was utilized for interpretability. Since the purpose of the analysis was not data reduction but the creation of meaningful scale variables using all the available data, no item was eliminated because of a low factor loading. Based on the results of the factor analysis, composite scales were constructed and the same items were grouped for all three years' data respectively. Existing taxonomies were used as the frame of reference to discern the most meaningful scales for describing students' self-perceived gains in college.
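As a concrete illustration of this extraction step, the sketch below runs a principal components extraction on the twenty-four gain items and applies a varimax rotation. The DataFrame name, the input file, and the five-component request are illustrative assumptions, not details of the consortium instrument or of the author's actual analysis.

    # Sketch of the extraction described above: principal components of the
    # twenty-four gain items, followed by a varimax rotation of the retained loadings.
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import scale

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Orthogonally rotate a loading matrix toward simple structure (standard varimax)."""
        p, k = loadings.shape
        rotation = np.eye(k)
        var = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            u, s, vt = np.linalg.svd(
                loadings.T @ (rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
            )
            rotation = u @ vt
            new_var = s.sum()
            if new_var < var * (1 + tol):
                break
            var = new_var
        return loadings @ rotation

    # `gains`: one row per senior, one column per 4-point gain item (1=not at all ... 4=greatly).
    gains = pd.read_csv("senior_survey_1997_gain_items.csv")   # hypothetical file name

    X = scale(gains)                                           # standardize items (correlation-matrix PCA)
    pca = PCA(n_components=5).fit(X)
    # Unrotated loadings = component weights scaled by the square roots of the eigenvalues.
    unrotated = pca.components_.T * np.sqrt(pca.explained_variance_)
    rotated = pd.DataFrame(varimax(unrotated), index=gains.columns,
                           columns=[f"Factor{i + 1}" for i in range(5)])

    # No item is dropped; each item is grouped with the factor on which it loads highest,
    # mirroring the scale-building choice described in the text.
    print(rotated.round(2))
    print(rotated.abs().idxmax(axis=1))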
Reliability analyses were then conducted for all the scale variables to determine the appropriateness of the items used for each grouping. Correlation and alpha indexes of both scales and individual items were examined and compared across the three years to check the consistency and stability of the developed scales.

After the SSPGs were constructed, two sets of independent variables were extracted from the survey data to test their utility. The first set of variables includes student demographic characteristics: sex, ethnicity, citizenship, family income, and parents' highest educational level. The second set has to do with three important aspects of student college experience: GPA, the major field of their degree, and the overall level of satisfaction with their undergraduate experience (1=very dissatisfied; 5=very satisfied). The tests of the utility of the developed SSPGs followed a two-step process. First, with each SSPG considered separately, multiple regression procedures were performed to discern the associations between the independent variables and each SSPG. Second, with all the SSPGs considered simultaneously, multivariate analysis of variance (MANOVA) procedures were conducted to examine the effects of college affiliation (A&S and ENGR) and three levels of satisfaction (1=dissatisfied; 2=ambivalent; 3=satisfied) as independent variables on the five SSPG scales. The rationale behind these tests was: 1) an SSPG is a good measure of student gains if it displays some level of consistency in the way it relates to the independent variables across different years' data; and 2) the SSPGs are good measures of student gains if they have disparate impacts on students who were affiliated with different colleges and reported different levels of satisfaction with their college experience.

It should be noted that these procedures were used for multiple purposes, not simply statistical inference. As a matter of fact, since the entire populations of the three classes were used for the analyses, statistical inferences are barely necessary. The inferential results would make sense only if the data were assumed to constitute a random sample. In research practice, nonetheless, tests of significance are often used to analyze nonrandom data, with the results pointing to the presence of a relatively considerable effect. The inferential results included in this study should only be interpreted in such a manner (Chen, 1998; Chen & Cheng, 1999).

Results

Table 1 shows the rotated factor structure of the five-factor solution. A content analysis yielded the following grouping of the scale variables: 1) Practical competence (Bowen's (1980) term); 2) Human characteristics (Lenning's (1977, 1980) term); 3) Leadership competence; 4) Academic ability; and 5) Foreign language skills. Note that the only items with factor loadings lower than 0.5 are "Function independently, without supervision" in factor 1 and "Understand myself: abilities, interests, limitations, personality" in factor 2. These two items were nonetheless retained for their meaningful contribution to the respective scales. Applying the factor analysis results from the 1997 data to the data of the following years yielded stable and consistent scale variables. The ranges of the alpha values from the reliability analyses of the 1997 to 1999 data are: 0.85-0.87 for scale 1; 0.83-0.86 for scale 2; 0.85-0.86 for scale 3; and 0.61-0.63 for scale 4. Scale 5 is a single item.
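For readers who wish to run the same kind of internal-consistency check on their own scale data, a generic Cronbach's alpha calculation is sketched below; the function name and the column names in the usage comment are illustrative and are not taken from the study's data files.

    # Generic Cronbach's alpha for a composite scale: alpha = k/(k-1) * (1 - sum of
    # item variances / variance of the scale total).
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """items: one row per respondent, one column per item in the scale."""
        items = items.dropna()
        k = items.shape[1]                           # number of items in the scale
        item_vars = items.var(axis=0, ddof=1).sum()  # sum of individual item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical usage: columns q1..q8 hold the eight practical-competence items.
    # alpha_1997 = cronbach_alpha(survey_1997[["q1", "q2", "q3", "q4", "q5", "q6", "q7", "q8"]])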
Table 2 is a summary of the results of multiple regression analyses conducted to examine the associations of independent variables with each of the five self-perceived gain scales. Apparently, the level of satisfaction with undergraduate education is the factor most closely associated with students’ self perceived gains in college. Student major also seems to play an important role in their self-perceptions. While natural science and engineering majors perceived having higher gains in academic ability than humanities and social science majors, engineering majors were less confident about their gains in human characteristics and foreign language skills than their counterparts in other majors. In general, the self-perceived gains of the graduates are less influenced by their demographic and socioeconomic background than by college-related variables. The MANOVA procedures for students’ college affiliation and satisfaction on the five SSPGs for all three years (Table 3) were statistically significant by the Wilks’ Lambda criteria (F=7.76, df=5/10, p<.01 for 1997; F=9.01, df=5/10, p<.01 for 1998; F=12.04, df=5/10, p<.01 for 1999). Inspection of the univariate F-ratios reveals statistically significant differences among the three satisfaction levels on four of the five SSPGs, with the only exception on foreign language skills for the 1997 and 1999 models. Graduates of the two colleges also show statistically different self-perceptions on four of the five SSPGs, with the exception on practical competence. However, none of the college/satisfaction interactions is statistically significant. Further analyses of means broken down by college and satisfaction level confirmed that, despite the differences in level of satisfaction, students from both colleges show the same pattern of selfperceptions on all the five SSPGs: the higher the satisfaction level, the better they felt about their gains in the five areas. One noteworthy pattern emerges from examination of both the regression and the MANOVA results for all three years’ data: the perceptions of students from these three cohorts were very consistent. For instance, females consistently showed higher selfperceived gains in human characteristics than their male counterparts (betas are .06, .09, and .09 for 1997, 1998, and 1999 in Table 2); humanities students tended to report higher gains in foreign language skills than those from other majors (betas are .07, .10, and .15 for 1997, 1998, and 1999 in Table 2); and no statistical significance existed between A&S and ENGR students in their self-perceived gains in practical competence. 31 Summary and Discussion The analyses of the twenty-four questions regarding student self-perceived gains from an externally-developed senior survey yielded five outcome scales: 1) Practical competence; 2) Human characteristics; 3) Leadership competence; 4) Academic ability; and 5) Foreign language skills. Analyses show that the level of student satisfaction with undergraduate education is closely associated with their self-perceived gains in college. Student major also seems to play an important role in their self-perceptions. In general, graduating seniors’ self-perceived gains are less influenced by their demographic and socioeconomic background than by college-related variables. 
Given the consistency of SSPGs over a three-year period and their disparate impact on students with different characteristics, we can comfortably conclude that the students’ perceptions of their collegiate experience in this particular institution are well represented in the five SSPGs derived from self reports. In the past decade the idea of assessing “how much students learn or improve or grow in school or in college, as well as how they stand at graduation” (Belcher, 1987) has been gaining momentum over the traditional “reputational” and “resources” approaches. This study is a demonstration of how this new approach can work even if an institution has already committed to using an externally-developed survey to assess student collegiate experience. The five student self-perceived gain scales (SSPGs) derived from the graduating senior survey have not only presented the student version of the outcome measures of their collegiate experience, but also are comprehensive and meaningful to an institution that has a long tradition of emphasizing the breadth of learning through general education and community services. The usefulness of this study is that any institution can follow the methodology demonstrated in this study and derive its own outcome measures of student collegiate experience using whatever student self-reports they have chosen. However, being able to form outcome measures does not necessarily mean that an institution has found the answer to the critical questions of what is excellence in higher education and how it can be attained and assessed. The lesson we learned in this study is that the process of searching for outcome measures itself is an institutional “soulsearching” process, in which the college community has to revisit and/or redefine its institutional missions and goals constantly. The fact that so many taxonomies can be used for assessing college outcomes clearly shows that there can be as many ways of defining excellence in higher education. The ultimate goal of student assessment, however, should be to use the results of the assessment to readjust the existing mission and goals, and thus to provide a better institutional environment for student learning and growth. 32 Table 1. Factor Analysis Results of Student Self-Perceived Gains. 
Items and Scales 1 Scale 1: Practical Competence Acquire new skills and knowledge on my own Think analytically and logically Formulate creative/original ideas and solutions Communicate well orally Write effectively Synthesize/integrate ideas and information Plan and execute complex projects Function independently, without supervision Scale 2: Human Characteristics Identify moral and ethical issues Place current problems in historical/cultural/philosophical perspective Appreciate art, literature, music, drama Develop awareness of social problems Acquire broad knowledge in the arts and sciences Understand myself: abilities, interests, limitations, personality Scale 3: Leadership Competence Function effectively as a member of a team Lead/supervise tasks and people Relate well to people of different races, nations, religions Develop self-esteem, self-confidence Establish a course of action to accomplish goals Evaluate and choose between alternative courses of action Scale 4: Academic Ability Use quantitative tools Understand role of science/technology in society Gain in-depth knowledge of a field Scale 5: Foreign language Skills 2 Factors 3 0.73 0.69 0.68 0.60 0.60 0.53 0.50 0.36 4 5 Responses are measured on a 4-point scale: 1=not at all; 2=a little; 3=moderately; 4=greatly. 0.70 0.70 0.70 0.69 0.69 0.47 0.76 0.73 0.64 0.57 0.51 0.50 0.69 0.65 0.50 0.96 33 Table 2. Regression Beta Weights for the 5 Scales with Student Characteristics. Practical Competence 1997 1998 1999 Human Characteristics 1997 1998 1999 Leadership Competence 1997 1998 1999 Academic Ability 1997 1998 1999 Foreign Language 1997 1998 1999 Sex Female (Male) Ethnicity Asian Black Hispanic White (Other) Citizenship US permanent resident Foreign (US citizen) Family Income Parent Highest Education Overall GPA Major Humanities Natural Science Soc Science Engineering Double Major (Other) Satisfaction 0.07 0.06 -0.07 0.09 0.07 0.07 0.06 0.09 0.09 0.07 0.07 0.07 0.09 0.11 -0.1 0.09 0.07 -0.11 -0.06 -0.1 0.1 0.07 0.07 -0.08 0.07 0.11 -0.06 0.12 0.08 -0.07 -0.07 0.7 0.35 -0.07 0.39 0.11 0.11 0.06 0.06 -0.17 -0.12 -0.14 0.42 0.3 0.34 0.35 0.15 0.18 0.22 0.17 0.19 0.2 R2 Note: All the beta weights listed in the table are significant at the .05 level (p<.05). 34 -0.1 -0.15 -0.06 -0.13 -0.18 -0.16 0.16 0.06 0.15 -0.13 -0.09 0.3 0.15 0.21 0.16 0.11 0.13 0.1 0.06 0.16 0.31 0.33 0.41 0.21 0.24 0.29 0.12 0.12 0.18 0.19 0.15 0.2 0.07 0.1 0.15 0.06 0.1 -0.23 -0.16 -0.14 -0.08 0.09 0.2 0.13 0.1 0.12 Table 3. Results of MANOVA Comparisons for Student Satisfaction and their College Affiliation on the SSPG’s. Practical Competence Human Characteristics Leadership Competence Academic Ability Foreign Language Model: 1997 Overall1 College Satisfaction College*Satisfaction 24.41* 0.03 31.75* 1.24 28.03* 37.69* 24.08* 0.01 19.44* 10.55* 29.71* 0.60 24.53* 58.87* 13.73* 0.88 12.84* 47.85* 2.52 0.77 Model: 1998 Overall2 College Satisfaction College*Satisfaction 27.44* 0.21 43.01* 2.01 28.83* 8.82* 23.56* 2.66 17.30* 4.04 23.34* 0.69 16.55* 11.39* 14.65* 1.32 10.38* 9.24* 6.94* 0.32 MODEL: 1999 OVERALL3 College Satisfaction College*Satisfaction 38.25* 0.18 49.25* 1.25 35.43* 18.88* 39.52* 0.07 34.52* 13.64* 44.11* 0.90 27.10* 34.29* 24.78* 0.29 15.30* 15.25* 3.88 1.53 * p<.01. 1 Significant by the Wilks' Lambda criteria (F=7.76, df=5/10, p<.01). 2 Significant by the Wilks' Lambda criteria (F=9.01, df=5/10, p<.01). 3 Significant by the Wilks' Lambda criteria (F=12.04, df=5/10, p<.01). 35 References Anaya, G. (1999). 
College impact on student learning: Comparing the use of selfreported gains, standardized test scores, and college grades. Research in Higher Education 40(5): 499-526. Astin, A. W. (1973). Measurement and determinants of the outputs of higher education. In L. Solmon & P. Taubman (Eds.), Does College Matter? Some Evidence on the Impacts of Higher Education. New York: Academic Press. Astin, A. W. (1974). Measuring the outcomes of higher education. In H. R. Bowen (Ed.) Evaluating Institutions for Accountability (New Direction for Institutional Research, no. 1). San Francisco: Jossey-Bass. Astin, A. W. (1977). Four Critical Years: Effects of College on Beliefs, Attitudes, and Knowledge. San Francisco: Jossey-Bass. Astin, A. W. (1984). Excellence and equity: Achievable goals for American education. Phi Kappa Phi Journal, 64(2), 24-29. Belcher, M. J. (1987). Value-added assessment: College education and student growth. In D. Bray, & M. J. Belcher (eds.) Issues in Student Assessment (New Direction for Community Colleges, no. 59). San Francisco: Jossey-Bass. Boden, V. M. & Banta, T. W. (1994). Using Performance Indicators to Guide Strategic Decision Making (New Direction for Institutional Research, no. 82). San Francisco: Jossey-Bass. Bottrill, K. V. & Boden, V. M. (1994). Appendix: Example from the literature. In Boden, V. M. & Banta, T. W. (eds.) Using Performance Indicators to Guide Strategic Decision Making (New Direction for Institutional Research, no. 82). San Francisco: Jossey-Bass. Bowen, H. R. (1980). Investment in Learning. San Francisco: Jossey-Bass. Brown, R. & DeCoster, D. (1982). Mentoring-Transcript Systems for Promoting Student Growth. San Francisco: Jossey-Bass. Chen, S. (1998). Mastering Research: A Guide to the Methods of Social and Behavioral Sciences. Chicago: Nelson-Hall. Chen, S. & Cheng, D. X. (1999). Remedial Education and Grading: A Case Study Approach to Two Critical Issues in American Higher Education. A research report submitted to the Research Foundation of the City University of New York (PSC-CUNY Research Grant No. 669282). 36 Cheng, X., & Voorhees, R. (1996). Challenges in implementing core indicators of effectiveness for Colorado’s community colleges. Resources in Education, July. JC 960 169. Los Angeles, CA: ERIC Clearinghouse for Community Colleges. Chickering, A. W. & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin 39(7): 3-7. Ewell, P. (1984). The Self-Regarding Institution: Information for Excellence. Boulder, CO: National Center for Higher Education Management Systems. Ewell, P. (Ed.) (1985a). Assessing Education Outcomes (New Direction for Institutional Research, no. 47). San Francisco: Jossey-Bass. Ewell, P. (1985b). The value-added debate ... continued. American Association for Higher Education Bulletin, 38, 12-13. Hanson, G. (Ed.) (1982). Measuring Student Development (New Direction for Institutional Research, no. 20). San Francisco: Jossey-Bass. Higher Education Research Institute (1989). Follow-Up Survey. University of California, Los Angeles. Kur, G. D., Pace, C. R. & Vesper, N. (1997). The development of process indicators to estimate student gains associated with good practices in undergraduate education. Research in Higher Education 38(4) 435-454. Kur, G. D., Hu, S. & Vesper, N. (2000). “They shall be known by what they do”: An activities-based typology of college students. Journal of College Student Development 41(2) 228-244. Jacobi, M., Astin, A. W. & Ayala, F., Jr. (1987). 
College Student Outcomes Assessment: A Talent Development Perspective. ASHE-ERIC Higher Education Report No. 7. Washington, DC: Association for the Study of Higher Education. Lenning, O. T., Lee, Y., Micek, S., & Service, A. (1977). A Structure for the Outcomes of Postsecondary Education. Boulder, CO: National Center for Higher Education Management Systems. Lenning, O. T. (1980). Needs as a basis for academic program planning. In R. Heydinger (Ed.) Academic Planning for the 1980s, (New Direction for Institutional Research, no. 28). San Francisco: Jossey-Bass. Mentkowski, M. & Doherty, A. (1983). Careering after college: Establishing the validity of abilities learning in college for later careering and professional performance. Final report to NIE. ED 252 144. 37 Moxley, L. S. (1999). Student affairs research and evaluation: An inside view. In Malaney, G.D. (ed.) Student Affairs Research, Evaluation, and Assessment: Structures and Practice in an Era of Change (New Direction for Student Services, no. 85). San Francisco: Jossey-Bass. National Center for Education Statistics (1991). Education Counts: An Indicator System to Monitor the Nation’s Educational Health. Washington, DC: U.S. Government Printing Office. National Center for Higher Education Management Systems (1994). A Preliminary Study of the Feasibility and Utility for National Policy of Instructional “Good Practice” Indicators in Undergraduate Education. Boulder, CO: National Center for Higher Education Management Systems. National Education Goals Panel (1992). The National Education Goals Report: Building a Nation of Learners. Washington, DC: U.S. Government Printing Office. Pace, C. R. (1979). Measuring the Outcomes of College. San Francisco: JosseyBass. Pace, C. R. (1985). The Credibility of Student Self-Reports. Los Angeles: University of California, The Center for the Study of Evaluation, Graduate School of Education. Pascarella, E. T. & Terenzini, P. T. (1991). How College Affects Students: Findings and Insights from Twenty Years of Research. San Francisco: Jossey-Bass. Pike, G. R. (1995). The relationships between self reports of college experiences and achievement test scores. Research in Higher Education 36: 1-22. 38 INSTITUTIONAL RESEARCHERS: CHALLENGES, RESOURCES AND OPPORTUNITIES Anne Marie Delaney Director of Institutional Research Babson College Purpose. This paper presents the results of a study that investigated challenges institutional researchers encounter in their career; resources for coping with these challenges; and the impact of these challenges on job quality and on engagement in policy. The major research questions addressed in this study are: • What are the primary professional challenges institutional researchers encounter? • How do these challenges vary by level of position and use of resources? • To what extent do level of position, challenges and resources predict job quality? • What impact do challenges have on institutional researchers' engagement in policy? • How do job quality, level of position, challenges and resources predict involvement in policy? The goal of this research is not only to identify and understand the problems, but also to propose creative strategies to meet these challenges and thus enhance institutional researchers' professional status and effectiveness. In the context of this study, professional challenges encompass immediate concerns as well as difficulties experienced during the course of one's career. 
Three major areas addressed include: concerns about one's current job; difficulty in securing support for one's values and work; and pressure to compromise to meet career demands.

Review of the Literature. During the last three decades, researchers have investigated the problems and challenges institutional researchers encounter in their professional practice. Gubasta (1976) defined problems facing college decision makers and the increasing information needs of external agency representatives as sources of conflicting pressure on institutional researchers. Storrar (1981) identified role conflict as a source of stress for institutional researchers. She found that while institutional researchers perceived their actual roles as high on political responsiveness and political advocacy, they preferred roles of policy advocacy and low political responsiveness. Sanford (1983) cited little extrinsic recognition for the work, and the need to work with a number of other persons and offices without having direct control, as primary sources of stress for institutional researchers. Huntington and Clagett (1991) reported insufficient staff, excessive workload, lack of access to quality information and decision-makers, and inadequate training of staff as the problems most frequently experienced by institutional researchers.

Matier, Sidle and Hurst (1995) offer ideas for meeting such challenges. They recommend that institutional researchers exercise leadership in defining their work and expand their sphere of influence by assuming roles as information architects, change agents, and consultants of choice within their respective institutions. Hurst, Matier and Sidle (1998) also propose that institutional researchers serve as facilitators of the learning process as a way of enhancing the role of institutional research, and that institutional research play a key role in promoting the success of teams to ensure that decisions are grounded in the support of institutional constituents. Such initiatives may strengthen institutional researchers' ability to meet the challenges of demanding workloads and expand the possibilities for decision-making influence and professional advancement.

Data Source. Data for this study are based on results from a mailed survey sent to 304 institutional researchers in the Northeast; 221 returned completed surveys, yielding a response rate of 73 percent. The respondent group reflects the demographic, educational and professional diversity of the institutional research profession. Of the 221 respondents, 41 percent are male and 59 percent are female; 40 percent possess a doctorate; 42 percent have a master's degree; and 18 percent hold a bachelor's degree. Respondents represent a range of professional positions. Eleven percent hold titles at the level of dean to vice-president; 50 percent are directors; 10 percent are associates; 16 percent are analysts, coordinators or managers; and 13 percent are assistants or research and technical specialists.

Analytical Techniques. Analyses were conducted with individual survey items and computed scales. The scales represent the following constructs: engagement in policy, job quality and professional challenges. Bivariate techniques - correlation, chi-square, t-test, and analysis of variance - examined the relationships between level of position, resources and challenges. Path analysis assessed the direct and indirect effects of level of position, resources and challenges on job quality and on engagement in policy.

Scale Development.
Factor analyses were conducted to establish construct validation, that is, to identify the unidimensional or multidimensional constructs underlying the items related to professional challenges, job quality, and engagement in policy. Common factor analysis or the principal axis factor method was employed. This method was chosen since it assumes that the factors are correlated. Results from factor analyses indicated which individual items were correlated with each other and what underlying dimensions were represented in the data. Factors were selected that explained a substantial amount of variance and included at least two or more items. Scales were then created by combining similar items into one measure. Generally, items with factor loadings of .5 or higher on a particular factor were chosen to be included in a scale. Prior to using the scales in the analysis, alpha reliability coefficients were computed to determine the internal consistency of the scales. Table 1 presents the names, statistical properties, and correlations among these scales. Items comprising these scales are presented in Appendix A. The reliability of these 40 scales is very high with coefficients ranging from .80 to .90. As reflected in the mean scale scores, the most prevalent challenge among institutional researchers involves experiencing overwhelming demands in their current jobs, followed by managing conflict between work and personal/family needs, coping with limited opportunity and dealing with threats to quality standards. The moderately high means on engagement in policy and job quality suggest that many institutional researchers are involved in policy and have a quality work experience. Table 1 A. Statistical Properties of the Scales Range of St. No. of Responses Mean Dev. Reliability Items Low-High Professional Challenge Scales a. Experiencing Overwhelming Demands b. Managing Conflict between Work and Family c. Coping with Limited Opportunity d. Dealing with Threats to Quality Standards 3.25 1.07 2.56 .95 2.48 1.00 2.07 .77 .87 .83 .89 .80 3 3 6 2 1-5 1-5 1-5 1-5 3.72 3.24 .86 .89 12 10 1-5 1-5 e f Work Experience Scales e. Job Quality f. Engagement in Policy .70 .82 B. Correlation among the Scales a a. Experiencing Overwhelming Demands b. Managing Conflict between Work and Family c. Coping with Limited Opportunity d. Dealing with Threats to Quality Standards e. Job Quality f. Engagement in Policy * p < .05; ** p < .01; *** p < .001 41 b c .56*** - d .17* 44*** -.47*** - .43*** .21** .25*** -.26*** -.23*** .61*** As shown in Table 1, correlation analyses results identified statistically significant correlations among some of the scales. A strong positive correlation exists between experiencing overwhelming demands and managing conflict between work and family. A moderate, significant correlation also exists between coping with limited opportunity and dealing with threats to quality standards. Engagement in policy is positively correlated with experiencing overwhelming demands and managing conflict between work and family. Job quality and engagement in policy are negatively correlated with coping with limited opportunity and dealing with threats to quality standards. Finally, a strong, positive correlation exists between job quality and engagement in policy. Results Frequency of Challenges This section on the nature and frequency of challenges among institutional researchers presents results from analyses based on individual survey items and computed scales. Concerns about Current Job. 
Figure 1 identifies the top six specific aspects of their current job that institutional researchers describe as 'very much' of a concern. As shown, three of these concerns relate to work demands - having too much to do, the job is taking too much out of you, and stressful demands of the job. The other two frequently reported concerns relate to career advancement: having little chance for advancement and limited options for career development.

Figure 1. Institutional Researchers' Concerns about their Current Job (percent reporting 'very much'): having too much to do, 39%; having little chance for advancement, 20%; limited options for career development, 15%; job is taking too much out of you, 14%; stressful demands of the job, 13%; lack of recognition, 10%.

Challenges during Research Career. Figure 2 shows the percent who reported they experienced various challenges 'very much' during their career. These challenges refer to obtaining support for one's values and standards; securing resources to conduct the work; and obtaining support in resolving conflicts and ethical issues. As shown, 24 percent report that producing quality work within time constraints has been 'very much' of a challenge. Between 13 and 15 percent also report the following issues have been 'very much' of a challenge during their career: receiving credit for work; finding opportunities to be heard; and attaining support for professional standards. These data identify potentially serious issues, as these challenges threaten institutional researchers' professional status, job quality, and potential for advancement.

Figure 2. Institutional Researchers' Career Challenges (percent reporting 'very much'): producing quality work within time constraints, 24%; receiving credit for the work you do, 15%; finding opportunities to make your voice heard, 14%; attaining support for your professional standards, 13%; obtaining necessary resources, 12%; receiving support for personal values, 10%; gaining support for your work, 10%; securing support with an ethical dilemma, 6%; resolving conflicts with superiors, 4%.

Pressure to Compromise. Figure 3 identifies the top four compromises respondents indicated they 'frequently' or 'very often' felt they had to make for their career. As shown, institutional researchers most frequently cited pressures related to work demands and professional integrity. Some 25 percent cited working excessive overtime; 21 percent reported neglecting personal needs; 14 and 12 percent respectively reported allowing others to take credit for their work and performing work with inadequate training.

Figure 3. Pressures to Compromise Experienced by Institutional Researchers (percent reporting 'frequently' or 'very often'): work excessive overtime, 25%; neglect personal needs, 21%; allow others to take credit for your work, 14%; perform work with inadequate training, 12%.

Variation in Challenges

Bivariate analyses were conducted to answer the question: How do professional challenges vary by level of position and use of resources? These analyses included t-tests, analysis of variance and the Student-Newman-Keuls post hoc test to determine where the significant differences occur among institutional researchers. These analyses were conducted with individual survey items and with computed scales.

Level of Position and Professional Challenges.
Results based on the individual survey items, revealed statistically significant differences between level of position and the following professional challenges that relate to work demands: the job is taking too much out of you (F = 3.28, p < .05); working excessive overtime (F = 6.08, p < .001); neglecting family responsibilities (F = 4.11, p < .01); and neglecting personal needs (F = 3.47, p < .01). Further, the Student-Newman-Keuls post-hoc test results indicated that these challenges were significantly higher among institutional researchers holding the highest level positions from dean to vice president. 44 Level of current position was also significantly related to minimal opportunity to use one's intelligence (F = 2.72, p < .05); job monotony or lack of variety (F = 3.41, p < .01; and pressure to lower one's standards (F = 2.59, p < .05). These challenges, which involve the intellectual quality and integrity of one's professional life, were generally highest among research analysts and associates. Scale level analyses revealed statistically significant differences between level of position and two challenge scales: experiencing overwhelming work demands (F = 3.19, p < .05) and managing conflict between work and personal/family needs (F = 6.35, p < .001). The means were highest among those holding positions from dean to vice president. According the Student-Newman-Keuls test results, the difference was statistically significant on managing conflict between work and personal/family needs. Resources and Professional Challenges. T test results documented the value of a mentor and a strong professional network in coping with professional challenges. Those who had a mentor were significantly less likely to report that the job was taking too much out of them (t = 2.25, p < .05) or that they were having difficulty in obtaining necessary resources for their work (t = 2.05, p < .05). Also, those who were part of a strong professional network were significantly less likely to report the following concerns about their present job: little chance for advancement (t = 1.97, p < .05); limited options for career development (t = 3.14, p < .01); minimal opportunity to use one's intelligence (t = 3.45, p < .001); inadequate opportunity to show creativity (t = 2.46, p < .05); and job monotony or lack of variety (t = 3.45, p < .001). Institutional researchers who report they are part of a strong professional network also report they are significantly less likely to experience pressure to make professional or ethical compromises, including to perform work with inadequate training (t = 4.40, p < .001); to present a false, less competent image ( t = 3.69, p < .001); to sacrifice quality (t = 2.16, p < .05); or to treat others unfairly (t = 2.20, p < .05). Further analysis with the challenge scales identified a statistically significant relationship between having a mentor and coping with limited career opportunity (F = 4.13, p < .01). This challenge was highest among those who did not have a mentor and lowest among those who had both a male and female mentor. Those who reported they were part of a strong professional network also reported significantly less challenge in dealing with threats to quality standards (F = 2.52, p < .05). Path Analysis Technique. Path analysis was employed to answer the following questions. To what extent do level of position, challenges and resources predict job quality? How do job quality, level of position, challenges and resources predict involvement in policy? 
Technically, the path-analytic technique assessed the direct and indirect effects of a set of exogenous variables - level of position, challenges and resources - on an endogenous variable - job quality and the effects of all of the exogenous variables and job quality on engagement in policy. 45 Figure 4 shows the results visually in a path diagram. The lines indicate the pathways that had beta-weights greater than .10, with the specific beta-weight indicated for each pathway. Each path coefficient is the beta-weight for the precursor variable on the endogenous variable. In an attempt to control for practical significance, when the standardized regression coefficient (beta-weight) for a particular path was less than .10 (Hackett, 1985), the path was dropped. Figure 4 Path Diagram for Predicting Engagement in Policy Professional Network Threats to Quality Conflict – Work/Family Limited Opportunity Mentor Level of Position .12 -.31 .12 Job Quality R2 = .47 .49 -.18 .22 Policy Engagement R2 = .43 .34 .26 The calculations of the direct and indirect paths are presented in Table 2. This causal analysis decomposes the correlation between two variables into three components: direct, indirect, and spurious. The direct and indirect components are summed to the total true causal effects whereas the spurious component is due to unexplained factors and is obtained by subtracting the total effect from the bivariate correlation coefficient. The direct effects are the effects that come directly from the precursor variable in the 46 dependent variable, without being mediated by other variables in the model. The indirect effects are the effects of the precursor variable as operating through or mediated by other variables on the dependent variable. For example, the zero order correlation between level of position and engagement in policy is .49. Path analysis documents that the direct and indirect effects respectively are .26 and .17. The total effect is .43 and the spurious effect is .49 - .43, or .06. Table 2 Path Analysis Results: Breakdown of Direct and Indirect Effects on Engagement in Policy Effects Path Bivariate r Professional Network Threats to Quality Conflict between Work and Family Limited Opportunity Mentor Level of Position Job Quality .29 -.23 .25 -.26 .16 .49 .61 Indirect Direct Total Spurious .06 -.15 .06 -.09 .11 .17 - .26 .49 .06 -.15 .06 -.09 .11 .43 .49 Correlations. As illustrated in Table 2, statistically significant correlations were found between engagement in policy and each of the exogenous variables: level of position ( r = .49, p < .001), followed by having a strong professional network ( r = .29, p < .001), and managing conflict between work and family ( r = .25, p < .001). Having a mentor is also positively related to engagement in policy ( r = .16, p < .05). In contrast, two of the professional challenges - coping with limited opportunity (r = -.26, p < .001) and dealing with threats to quality standards ( r = -.23, p < .001) are negatively correlated with policy engagement. Job quality has the strongest positive correlation with engagement in policy (r=.61, p < .001). Path Analysis Results. As reflected in the path coefficients, four of the six exogenous variables have a positive, direct effect on job quality. In order of magnitude, these variables are: level of position (.34), mentor (.22), professional network (.12) and conflict between work and family (.12). 
Path Analysis Results. As reflected in the path coefficients, four of the six exogenous variables have a positive, direct effect on job quality. In order of magnitude, these variables are: level of position (.34), mentor (.22), professional network (.12), and conflict between work and family (.12). In contrast, two variables - dealing with threats to quality standards (-.31) and coping with limited opportunity (-.18) - have negative effects on job quality. As indicated by the R2 of .47, these variables explain 47 percent of the variance in job quality.

All of the exogenous variables also have indirect effects, through job quality, on engagement in policy. These indirect effects range from -.15 for dealing with threats to quality to +.17 for level of position. Further, level of position is the only exogenous variable that has a direct effect (.26) on engagement in policy. The R2 of .43 demonstrates that the direct effects of job quality and the direct and indirect effects of the exogenous variables explain 43 percent of the variance in engagement in policy.

Discussion

Results from this research confirm findings from previous studies that addressed challenges institutional researchers encounter in their careers. In this study, approximately two-fifths identified having too much to do as very much of a concern in their current job. Close to one-quarter also reported that producing quality work within time constraints was very much of a problem in their career. In an earlier study, Huntington and Clagett (1991) also reported excessive workload as one of the problems most frequently experienced by institutional researchers.

Recognition for the work accomplished is also a problem for a substantial number of institutional researchers. In this study, 15 percent reported receiving credit for work as very much of a challenge, and 14 percent reported they frequently or very often felt they had to allow others to take credit for their work. These results involve an ethical issue regarding attributing appropriate credit to the person who accomplishes the work. In a previous study, Sanford (1983) identified little extrinsic recognition for the work as a primary source of stress for institutional researchers.

This study documents clearly that those who have a mentor or are part of a strong professional network have higher job quality and are significantly less likely to experience many potential sources of stress on the job, such as minimal opportunity to use one's intelligence, inadequate opportunity to show creativity, job monotony, or little chance for advancement. These positive effects of mentors and professional networks highlight the value of professional relationships. In this sense, the study supports the recommendation of Hurst, Matier and Sidle (1998) that institutional researchers promote a team approach as a way of enhancing effectiveness.

Recommendations

As noted previously, the goal of this research has been not only to identify and understand the challenges institutional researchers face but also to propose creative strategies to meet these challenges and thus enhance institutional researchers' professional status and effectiveness. Based on the study findings, the following recommendations are offered to achieve this goal.

• The institutional research profession should promote strong mentoring relationships. Professional associations should provide the structures for developing mentoring relationships. Institutional research directors and university administrators should provide resources and create opportunities to support mentoring relationships for institutional researchers, particularly those who are new to the profession.
• Institutional researchers should actively participate in professional associations and seek out colleagues for advice and support on a continuing basis. Regional and national associations should place a high priority on using the organizations to strengthen professional networks for new and experienced researchers. In addition to annual meetings, the associations should seek new ways to support networks during the year.

• The institutional research profession should advocate that institutional researchers' jobs be structured with a high level of independence, intellectual vigor and professional integrity. Directors' positions should be characterized by flexibility in establishing work priorities, authority in setting the research agenda, freedom in deciding how work is accomplished, and the authority required to get the work done. All positions, especially research associate and analyst positions, should offer opportunities for intellectual stimulation, creativity and career advancement.

Appendix A
Questionnaire Items Comprising the Professional Challenges Scales

Experiencing Overwhelming Demands (r = .89) *
When you think about your current job, how much, if at all, are the following items a concern for you?
a. The job is taking too much out of you
b. Having too much to do
c. Stressful demands of the job

Coping with Limited Opportunity (r = .89) *
When you think of your current job, how much, if at all, are the following items a concern for you?
a. Having little chance for advancement
b. Lack of recognition
c. Limited options for career development
d. Minimal opportunity to use your intelligence
e. Inadequate opportunity to show creativity
f. The job's monotony or lack of variety

Managing Conflict between Work and Family (r = .83) **
Do you feel you have had to make any of the following compromises to sustain your career?
a. Work excessive overtime
b. Neglect family responsibilities
c. Neglect personal needs

Dealing with Threats to Quality Standards (r = .80) *
Do you feel you have had to make any of the following compromises to sustain your career?
a. Lower your standards
b. Sacrifice quality

* Response Scale: 1 'Not at All' to 5 'Very Much'
** Response Scale: 1 'Never' to 5 'Very Often'

Appendix A (continued)
Questionnaire Items Comprising the Engagement in Policy and Job Quality Scales

Engagement in Policy (α = .89) ***
Indicate the extent to which the following statements describe your role or the use of your work at your institution.
a. Initiate discussions on program planning and policy
b. Collaborate in program development
c. Consult on impending policy changes
d. Serve on planning and policy committees
e. Present your work at executive level meetings
f. Conduct follow-up studies on the impact of work
g. Work is disseminated at the VP and Presidential level
h. Work is used in executive decision-making
i. Work effects program and policy changes
j. Work includes policy recommendations

Job Quality (α = .86) ***
To what extent are the following items a rewarding part of your job?
a. Freedom to decide how to do your work
b. Being able to make decisions on your own
c. Authority you need to get the job done
d. Being able to work on your own
e. Authority to set your own research agenda
f. Flexibility to establish your work priorities
g. Freedom to decide how your work will be shared
h. Freedom to accept or reject superior's suggestions
i. Independent authority to hire persons of your choice
j. Authority to spend department budget as you wish
k. Supervisory support for professional development
l. Financial support for professional development

*** Response Scale: 1 'Almost Never' to 5 'Very Frequently'
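The reliability coefficients reported for these scales are internal-consistency estimates computed from the item responses. As a rough sketch of one common such estimate, Cronbach's alpha, the following could be used (the response matrix here is hypothetical, standing in for respondents' 1-5 ratings on one scale's items; this is not the study's actual computation):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scale responses (e.g., 1-5 ratings)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses to the three 'Experiencing Overwhelming Demands' items
    rng = np.random.default_rng(0)
    base = rng.integers(1, 6, size=(200, 1))
    items = np.clip(base + rng.integers(-1, 2, size=(200, 3)), 1, 5)
    print(round(cronbach_alpha(items.astype(float)), 2))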
References

Ubasta, J. L. (May 1976). Conflicting pressures that impinge upon the operational effectiveness of institutional researchers: Challenges to the practitioner. Paper presented at the 16th annual forum of the Association for Institutional Research, Los Angeles, California. (ED 126837).

Hackett, G. (1985). Role of mathematics self-efficacy in the choice of math-related majors of college women and men: A path analysis. Journal of Counseling Psychology, 32, 47-56.

Hurst, P. J., Matier, M. W., and Sidle, C. C. (1998). Fostering teamwork and teams from the institutional research office. In J. F. Volkwein (Series Ed.) & S. H. Frost (Vol. Ed.), Using teams in higher education: Cultural foundations for productive change. New Directions for Institutional Research, 100, 17-25. San Francisco: Jossey-Bass.

Huntington, R. B. and Clagett, C. A. (November 1991). Increasing institutional research effectiveness and productivity: Findings from a national survey. Paper presented at the 18th annual conference of the North East Association for Institutional Research, Cambridge, Massachusetts. (ED 346779).

Matier, M. W., Sidle, C. C., and Hurst, P. J. (1995). Institutional researchers' roles in the 21st century. In P. T. Terenzini (Series Ed.) & T. R. Sanford (Vol. Ed.), Preparing for the information needs of the twenty-first century. New Directions for Institutional Research, 85, 75-84. San Francisco: Jossey-Bass.

Sanford, T. R. (May 1983). Coping strategies for job stress among institutional researchers. Paper presented at the 23rd annual forum of the Association for Institutional Research, Toronto, Ontario. (ED 232583).

Storrar, S. J. (May 1981). Perceptions of organizational and political environments: Results from a national survey of institutional research/planning officers at large public universities. Paper presented at the 21st annual forum of the Association for Institutional Research, Minneapolis, Minnesota. (ED 205094).

RESPONSIBILITIES AND STAFFING OF INSTITUTIONAL RESEARCH OFFICES AT JESUIT AND PROMINENT OTHER CATHOLIC UNIVERSITIES

Donald A. Gillespie
Director, Office of Institutional Research
Fordham University

The impetus for this research was an urgent need for comparative data on the typical staffing and responsibilities of institutional research (IR) offices. The author wished to obtain data that would enable officials at his university to judge whether the size of the staff of the IR office was commensurate with its objectives. To make such an assessment, it was necessary to know the responsibilities that would accompany given staffing levels at other colleges. Furthermore, because Fordham has both a Catholic and Jesuit identity, it was desirable to obtain data from schools with similar traditions.

Several regional studies have examined the size and responsibilities of IR offices. They have generally found that the enrollment of a school influences the size of the IR department, which in turn affects the complexity and sophistication of the analytical tasks that the IR office conducts (Delaney, 1997; Volkwein, 1990). There appears to be a common core of activities that most institutional research offices perform (Muffo, 1999; Volkwein). One must be cautious about generalizing from these studies. None reported separate statistics for Catholic or Jesuit institutions.
Those with samples drawn from institutional research associations may not be representative of colleges that do not belong to such organizations. In a comparison of North American regional studies, Muffo (1999) observed that regions differ in the type, control, and size of schools, as well as the requirements of accrediting organizations. He noted too that the dominance of enrollment research in IR offices in the northeast and New England might reflect the efforts of colleges to cope with slow enrollment growth in these regions. This exploratory study has three purposes: (1) to obtain data on staffing and responsibilities for institutional research at Jesuit colleges and at other Catholic universities that are large or have significant doctoral programs, (2) to present data that would enable administrators at peer institutions to assess the adequacy of staffing for institutional research functions, and (3) to explore a methodology for determining the staff necessary to accomplish typical IR tasks. Method Participants. The target population for this survey included all Jesuit colleges and universities, as well as Catholic institutions that were large or that had significant doctoral programs. The investigator obtained responses from 23 of the 28 Jesuit colleges and universities in the U.S., from 10 of the 12 largest Catholic schools (DePaul University Enrollment Management Research, 1998), and from 10 of the 11 Catholic 53 universities that participated in the 1995 rating of graduate programs by the National Research Council (Webster & Skinner, 1996). These categories overlap. Of 36 institutions in the sampling frame, 31 (or 86 percent) participated in the study. Procedure. The investigator conducted a telephone survey of officials responsible for institutional research during the summer and early fall of 2000. He promised not to report information that might be identified with a respondent's school. Directors or coordinators of institutional research provided data for the 1999-2000 academic year. They reported total headcount enrollment of the institution in fall 1999. They provided also headcount and full-time equivalent statistics for six categories of personnel: full- and part-time professionals, full- and part-time support and clerical workers, graduate assistants, and other student employees. Then, the respondents indicated whether their offices had performed each task on a checklist of IR projects. The investigator also gave participants an opportunity to identify responsibilities that were not included in the checklist. After conducting initial interviews, the researcher expanded the list. He obtained information from 31 schools on the activities in the initial questionnaire and from 19 colleges on the added items in the second phase of the survey. Some of the schools examined did not have institutional research offices. In such cases, the investigator obtained information from the administrator or faculty member who had the most responsibility for institutional research functions. The following report provides information on 29 schools. The investigator did not include the two largest institutions that responded to the survey because readers probably would be able to identify the schools. Table 1 displays the number of schools in the sample according to headcount enrollment and total full-time-equivalent personnel. The number of participants in the second phase of the survey is given only by enrollment because no results for the second stage are reported by size of staff. 
Results

The mean headcount enrollment of the institutions participating in the survey was 7,357 (SD = 3,243). The average size of IR offices was 2.9 full-time-equivalent (FTE) persons (SD = 1.8). The data on personnel were combined into three categories: full-time professionals, other employees (part-time professionals and full- and part-time clerical and support staff), and student workers (graduate assistants and other students). Figure 1 shows that the average size of a staff increases with enrollment and that student workers make up only a small proportion of the FTE staff. A few IR directors commented that it was not efficient to use students because they are temporary and part-time and require extensive training.

Table 1
Number of Schools in Sample by IR Staff Size and Enrollment

                                        Headcount Enrollment
    Sample and FTE Staff      < 5,000   5,000-9,999   10,000-14,999   Total
    Full sample
      < 3                        5           9               4          18
      3 to 5.99                  2           4               2           8
      >= 6                       -           1               2           3
      Total                      7          14               8          29
    Phase 2
      Total                      6           9               4          19

Figure 1. Full-Time-Equivalent IR Staff at Catholic Colleges and Universities by Enrollment (Headcount), 1999-2000 (N = 29). [Chart showing full-time professional staff, other staff, and student staff (FTE) for each enrollment category; not reproduced here.]

An alpha level of .05 was used for all statistical tests. To gain greater insight into the relation of office size to enrollment and institutional complexity, the investigator completed a regression of FTE staff against headcount enrollment and the Carnegie classification as revised in 1994 ("Carnegie Foundation's classification," 1994). To correct for heteroscedasticity, a generalized least squares (weighted) regression model was used. There was no significant relationship between Carnegie classification and FTE personnel. The full and reduced models appear in Table 2. See Figure 2 for a plot of predicted FTE staff against headcount enrollment as developed in the reduced model.

Table 2
Summary of Simultaneous Weighted Regression Analysis for Variables Predicting FTE Staff in Institutional Research Functions (N = 29)

                                   Full Model               Reduced Model
                               B           SE B           B           SE B
    Constant                   1.09951     0.65321        0.52187     0.53521
    Headcount Enrollment       0.00045*    0.00013        0.00033*    0.00010
    Carnegie Classification   -0.30955     0.20916           -           -
    Adj. R2                    0.29                        0.26
    SE                         0.86                        0.88
    F                          6.74*                      10.82*
    Note. * p < .05.

Figure 2. Predicted FTE Staff in IR Offices (1999-2000). [Plot of predicted FTE IR staff against headcount enrollment, from the reduced model; not reproduced here.]

To facilitate a comparison with Volkwein's (1990) results, FTE staff size was correlated with (1) headcount enrollment and (2) Carnegie classification. Significant correlations were obtained with headcount enrollment, r(29) = .46, and Carnegie classification, r(29) = .45. Volkwein obtained .73 and .60, respectively. Using Fisher's Z transformation, one can test the difference between correlations from two independent samples. The difference between (a) the correlation between headcount and FTE staff obtained here and (b) that of Volkwein is significant, χ2(1) = 68.56. The beta weight for enrollment regressed against FTE staff in the reduced regression model in this study was .54.

What tasks do IR offices perform? Table 3 shows the percentage of schools that report engaging in an activity by size of school. For tasks listed in the full sample, the table also provides percentages by full-time-equivalent staff size. Thirteen of the tasks listed by Volkwein (1990) had labels similar to the titles used in this study.
The last 56 column shows that percentage of institutions that reported completing similarly described tasks in Volkwein’s report (1990). Table 3 Percentage of IR Offices Completing Activities by Size of Institution and IR Staff Headcount Enrollment 5,000 - 10,000 VolkActivity and FTE Staff <5000 9,999 14,999 Total weina Responding to Surveys IPEDS Reports (F) 80% 100% 75% 89% < 3 FTE's 100% 75% 100% 88% 3 - 5.9 FTE's 100% 100% 100% > = 6 FTE's 86% 93% 88% 90% 85% Total Major Surveys, e.g., US News, NCAA (F) 100% 100% 75% 94% < 3 FTE's 100% 75% 100% 88% 3 - 5.9 FTE's 100% 100% 100% > = 6 FTE's 100% 93% 88% 93% Total Minor Surveys, e.g., College Guides (F) 60% 89% 25% 67% < 3 FTE's 100% 75% 100% 88% 3 - 5.9 FTE's 100% 100% 100% > = 6 FTE's 71% 86% 63% 76% 81% Total Total Major and Minor Surveys (F) 100% 100% 75% 94% < 3 FTE's 100% 75% 100% 88% 3 - 5.9 FTE's 100% 100% 100% > = 6 FTE's 100% 93% 88% 93% Total Fact Book (F) 60% 89% 50% 72% < 3 FTE's 50% 50% 100% 62% 3 - 5.9 FTE's 100% 100% 67% > = 6 FTE's 57% 71% 75% 69% 77% Total Retention Analysis (F) 100% 89% 100% 94% < 3 FTE's 50% 100% 100% 88% 3 - 5.9 FTE's 0% 100% 67% > = 6 FTE's 86% 86% 100% 90% 93% Total 57 Table 3 (continued) Activity and FTE Staff Financial Aid and Tuition Discount Analysis (F) < 3 FTE's 3 - 5.9 FTE's > = 6 FTE's Total Enrollment Management Admissions: Performance Monitoring (S) Admissions: Operational Support (S) Admissions: Marketing Research or Policy Analysis (S) Admissions: Research or Support (F) < 3 FTE's 3 - 5.9 FTE's > = 6 FTE's Total Enrollment Projections (S) Information System Policy Development (F) < 3 FTE's 3 - 5.9 FTE's > = 6 FTE's Total Assessment Surveys--Satisfaction; Cognitive, Personal, Career Development; Alumni (F) < 3 FTE's 3 - 5.9 FTE's > = 6 FTE's Total Academic Program Review (S) Participate in Regional Accreditation SelfStudies (F) < 3 FTE's 3 - 5.9 FTE's > = 6 FTE's Total 58 Headcount Enrollment 5,000 - 10,000 Volk<5000 9,999 14,999 Total weina 60% 100% 71% 44% 75% 100% 57% 100% 50% 100% 88% 61% 75% 100% 69% 100% 0% 33% 11% 75% 50% 63% 16% 17% 56% 75% 47% 0% 50% 22% 75% 100% 43% 78% 75% 50% 50% 63% 100% 28% 62% 67% 41% 79% 100% 75% 100% 93% 75% 100% 50% 75% 83% 75% 67% 79% 60% 78% 100% 100% 100% 71% 86% 83% 22% 50% 100% 100% 75% 50% 67% 100% 100% 79% 47% 33% 80% 100% 100% 75% 100% 86% 93% 75% 100% 100% 88% 89% 88% 100% 90%* 48% 14% 67% 60% 50% 57% 80% Table 3 (continued) Headcount Enrollment 5,000 - 10,000 Volk<5000 9,999 14,999 Total weina Activity and FTE Staff Faculty Analyses Faculty Load Analysis (F) 80% 78% 25% 67% < 3 FTE's 50% 50% 100% 62% 3 - 5.9 FTE's 100% 100% 100% > = 6 FTE's 71% 71% 62% 69% 76% Total 33% 11% 0% 16% Faculty Flow Analysis (S) 67% 56% 25% 53% 64% Faculty Compensation (S) Participation in Strategic Planning (F) 80% 89% 75% 83% < 3 FTE's 100% 75% 100% 88% 3 - 5.9 FTE's 100% 50% 67% > = 6 FTE's 86% 86% 75% 83% Total 100% 67% 50% 74% Peer Analyses—Benchmarking Studies (S) Environmental Scanning (F) 40% 22% 0% 22% < 3 FTE's 50% 0% 50% 25% 3 - 5.9 FTE's 100% 50% 67% > = 6 FTE's 43% 21% 25% 28%* 67% Total 33% 44% 50% 42% 44% Cost Analyses (S) Note. F = full sample (N= 29); S = second phase or small subsample (N = 19). a Volkwein (1990). *Difference between totals in this study and Volkwein (1990) is significant, p < .05. The percentages in the total column of table 3 indicate that more than 90 percent of IR offices participate in regional accreditation self-studies, complete IPEDS reports, respond to surveys, and conduct retention analyses. 
A minority of IR offices provide operational support or marketing research for admissions programs, participate in academic program review, analyze faculty flow, do environmental scanning, or perform cost analyses. In general, the proportions of Jesuit and Catholic colleges engaging in activities were not different from those listed in Volkwein's (1990) article. However, two-tailed tests of differences in proportions were significant for two activities. Namely, the schools in this survey were less likely than those in Volkwein's study to engage in environmental scanning, but more likely to participate in regional accreditation studies.

Because of small cell sizes, the investigator did not conduct statistical tests of association between institutional size, staff size, and performance of IR tasks. Nevertheless, several activities appear to be related to size of college. The IR offices at larger institutions are more likely to provide operational or research support to admissions offices, to complete enrollment projections, and to conduct cost analyses. Institutional researchers at small colleges are more likely to complete benchmarking studies and faculty analyses than their counterparts at large schools.

The data suggest several activities that are related to staff size. Small IR offices tend not to complete minor surveys for college guides and the like. This is particularly true at large schools. Rather, the offices of public affairs and admissions have responsibilities for such surveys. IR offices with small staffs are less likely than large offices to perform financial aid and tuition discount analyses, to conduct assessment surveys, and to do environmental scanning. The completion of fact books appears to be related to both school and IR staff size. The proportion of offices compiling fact books increases with size of college and size of IR staff. Furthermore, small IR offices in large colleges are among the least likely IR departments to complete fact books.

Respondents to the survey identified general responsibilities that were not included in the structured questionnaire. Table 4 lists the items mentioned. Except for policy and management studies, very few institutions performed any one of the activities. Furthermore, most of the projects were conducted in the largest IR offices in the survey.

Discussion

The average enrollment of institutions that participated in this investigation is in the middle range of the regional surveys summarized by Muffo (1999), as is the average size of institutional research staffs. In this study, the correlation between staff size and enrollment is much smaller than what Volkwein (1990) obtained (.46 vs. .73). However, the correlation obtained in the reduced (bivariate) regression model in this study was .54. These results suggest that enrollment has a moderate influence on the size of IR staffs in Catholic and Jesuit colleges, but to a lesser extent than is typical of institutions in the northeast. This report has used Carnegie classification as a measure of the breadth of degree programs and institutional complexity. The difference between the correlation of this measure with FTE staff obtained here and the one Volkwein found (.45 vs. .60) was not significant. It would appear that among Catholic and Jesuit institutions, factors apart from enrollment, extent of degree programs, and institutional complexity determine the size of IR offices.
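The comparisons with Volkwein's correlations rest on Fisher's Z transformation for correlations drawn from two independent samples. A minimal sketch of that test follows (Python; the second sample size is a placeholder, since Volkwein's n is not restated in this report, so the statistic produced will not reproduce the value reported in the text):

    import math
    from scipy import stats

    def compare_independent_correlations(r1, n1, r2, n2):
        """Two-tailed test of H0: rho1 == rho2 via Fisher's z transformation."""
        z1, z2 = math.atanh(r1), math.atanh(r2)
        se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
        z = (z1 - z2) / se
        p = 2 * (1 - stats.norm.cdf(abs(z)))
        return z, p

    # Correlation of FTE staff with headcount enrollment: .46 here (n = 29)
    # versus .73 in Volkwein (1990); n2 = 150 is a placeholder, not Volkwein's actual n.
    z, p = compare_independent_correlations(0.46, 29, 0.73, 150)
    print(f"z = {z:.2f}, p = {p:.4f}")

The reduced model in Table 2 can likewise be used directly for the staffing benchmark discussed below: predicted FTE staff is 0.52187 + 0.00033 times headcount enrollment.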
Most of the institutions in this survey completed a core set of tasks, which included IPEDS reports, surveys from outside organizations, and retention analyses. With three exceptions, the IR tasks performed present a profile like that of institutions in North East Association for Institutional Research (NEAIR; Volkwein, 1990). Unlike NEAIR members, the IR officers at Catholic and Jesuit schools were less likely to do environmental scanning and more likely to participate in regional accreditation studies. Some data obtained in this survey suggest that the size of a college might be related to IR responsibilities. A greater percentage of large IR offices at large universities compiled 60 fact books than IR departments at small colleges. It may be that large institutions require more formal means of disseminating information than small schools. Table 4 Responsibilities Not Included in Structured Questions Policy and management studies (ad hoc studies) Data and Information Management Cleaning and auditing data for all university academic systems Data warehouse development and operation Develop programs to extract data from the university information system Implementation of new university-wide information systems Academic Management Faculty contracts Course flow Classroom utilization management Analysis of productivity of departments and schools Editing of catalog Academic program and faculty development Organizational Development Departmental consultation and team building Coordinate Quality Improvement Program Coordinate all training and professional development programs of the university Facilitate discussions on academic issues (surveys and focus groups) Develop an analytic culture in the university Information Collection and Dissemination Community service report Faculty publications list Additional Assessment Activities Teacher and course evaluations Staff satisfaction surveys Student, faculty, and staff climate surveys Survey Design Consultation Financial and Management Analysis Tuition and fee policies Forecasting revenue from tuition and fees Budget models Work force analysis Some activities are uncharacteristic of IR offices with small staffs. These include compiling fact books, completing small surveys, analyzing financial aid and tuition discount programs, conducting assessment surveys, and doing environmental scanning. Many of these activities are either labor intensive or highly sophisticated. Perhaps core 61 IR activities take up most of the time of a small staff. A relatively large IR staff may be necessary to conduct labor intensive or highly specialized work. After controlling for enrollment, colleges in this study show considerable variation in IR staff size. Nevertheless, the models in this report enable IR officers at Jesuit and Catholic colleges to calculate an expected staff size based on enrollment. Furthermore, the data in tables 3 and 4 can be used to determine if a school is different from its peers because the IR office devotes resources to atypical activities or because it fails to complete common tasks. What are the limitations of this study and what are implications for future research? The sample size in this survey is small. However, it is important to recall that the respondents constituted 86 percent of the target population. Any extension of this work to a fuller range of colleges and universities would enable an investigator to increase sample size. 
In addition, any subsequent study should consider indicators of complexity other than Carnegie classification. These might include the number of schools and campuses within a university. It would be beneficial to emulate Volkwein’s (1990) analysis by obtaining data (1) on the organizational location of IR offices and (2) on the extent to which tasks are shared with other offices. Finally, to develop standards for what is required to complete IR tasks, it might be best to focus on the activity as the unit of analysis and to collect information on the skill level and amount of time required to complete major IR projects (Personal communication, Thomas J. Dimieri, November 6, 2000). References Carnegie Foundation's classification of 3,600 institutions of higher education. (1994, April 6). The Chronicle of Higher Education, pp. A18–A25. Delaney, A. M. (1997). The role of institutional research in higher education: Enabling researchers to meet new challenges. Research in Higher Education, 38, 1-16. DePaul University Enrollment Management Research. (1998, December). Enrollment at largest U.S. Catholic universities. (Unpublished report.) Chicago: Author. Muffo, J. A. (1999). A comparison of findings from regional studies of institutional research offices. In J. F. Volkwein (Ed.), New Directions for Institutional Research: No. 104, What is institutional research all about? A critical and comprehensive assessment of the profession (pp. 51-59). San Francisco: Jossey-Bass, Inc. Volkwein, J. F. (1990). The diversity of institutional research structures and tasks. In J. B. Presley (Ed.), New Directions for Institutional Research: No. 66, Organizing effective institutional research offices (pp. 7-26). San Francisco: Jossey-Bass, Inc. 62 Webster, D. S., & Skinner, T. (1996, May/June). Rating PhD programs: What the NRC report says…and doesn't say. Change, 28, 22-44. 63 64 NEW TECHNOLOGY AND STUDENT INTERACTION WITH THE INSTITUTION Gordon J. Hewitt Assistant Director, Institutional Research Tufts University Dawn Geronimo Terkla Executive Director, Institutional Research Tufts University Introduction Higher education institutions are facing a technological revolution in almost every aspect of operation. Universities are rushing to increase offerings of on-line courses, online course registration, automated advising systems, and to provide infrastructures to accommodate the growing computer and telecommunication needs of its faculty, staff and students. What makes managing this revolution even more difficult is the fact that the people leading the revolution on the user-end are high school and college-aged students, who bring their sophisticated technological habits to campus. Frand (2000) notes that in 1998, for the first time since television was introduced to the public, the number of hours young people spent watching television decreased. This decrease was due to the increased time spent on the Internet. The Web is now the prime information source for young people. However, unlike television, the Internet is an interactive medium. Young people are now communicating more than ever, whether it be through email, instant communication, or bulletin boards. In fact, it has recently been reported that the average connected American sends at least one email a day, spends on average 8.8 hours per week online, and visits an average of 9 sites (Milliron & Miles, 2000). 
One would expect that these averages would be exceeded by high school and college-age students. In 1996, it was estimated that approximately 90 percent of college and university students in North America had ready Internet access, compared to less than one-tenth of the general population (Chidley, 1996). Four years later the landscape is dramatically different. According to an October 2000 report by the National Telecommunications and Information Administration, 51 percent of all US households had computers, and of those households 80 percent had Internet access. While a 2000 estimate for the percentage of college and university students with ready access to the Internet is not available, it is quite likely that it has increased somewhat since 1996. Given that colleges and universities are now admitting students of the "NET generation," it is imperative that institutions understand how prospective students as well as enrolled students interact with various members of the campus community. Current estimates are that 56.8 percent of individuals age 18-24 and 53.4 percent of youth age 9-17 use the Internet (NTIA, 2000). One would expect that these numbers are likely to increase over the next five years.

Objective

The primary objective of this paper is to examine how prospective students as well as current undergraduates are using electronic communication to interact with various campus constituencies at a Research I university. Data from three distinct surveys were used to understand this phenomenon: 1) Undergraduate Non-Enrolled Survey (accepted but did not enroll), 2) Undergraduate New Student Survey and 3) Graduating Senior Exit Survey.

Literature

There has been an abundance of research done on the effect electronic communication has on learning and socialization in college (Green, 1998; Duin & Archee, 1996; Ritter, 2000; Windschitl & Lesehm-Ackerman, 1997; Zagorsky, 1997). There is not, however, a body of literature on how students interact electronically with university constituents outside of the classroom. Research that informs this study includes Selwyn's (1998) study of 16-19 year olds' domestic use of computers and the relationship with use of information technology in school or college, Fishman's (1999) study on predicting students' success with computer-mediated communication, and Piirto's (1998) work on how college students use and view e-mail in regards to specific content communication.

Data Sources and Methodology

Accepted Applicant Surveys. Undergraduate accepted applicants (non-enrolled students and new students) for the Fall semester were mailed surveys, with business-reply return envelopes, in May 2000. Of the 1,183 entering first year students, 798 returned New Student surveys, resulting in a 67.5 percent response rate. Of the 2,197 students who were accepted but did not enroll, 795 returned Non-Enrolled Student surveys. Thus, the response rate for non-matriculating students is 36.2 percent. The admissions survey instruments, which have been administered over the past fifteen years, were augmented this year to include questions about electronic communications. The admissions office staff and dean were quite instrumental in providing "content" guidance. Survey items were in the form of categorical responses, Likert-type scales, and open-ended comments. The array of questions posed to incoming students and non-enrolled students was very comparable.
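The response rates just cited, and the item frequencies reported in the findings, are simple tabulations once returns are matched to the mailing lists; the study itself produced them as frequency runs in SPSS. As an illustrative sketch only (pandas stands in for SPSS, and the data frame and column names are hypothetical):

    import pandas as pd

    # Hypothetical extract: one row per accepted applicant, with survey-return flags.
    applicants = pd.DataFrame({
        "enrolled": [True, True, False, False, False],
        "returned_survey": [True, False, True, True, False],
        "web_access_freq": ["daily", None, "weekly", "daily", None],
    })

    # Response rate by enrollment status (e.g., 798/1,183 = 67.5% for new students)
    rates = applicants.groupby("enrolled")["returned_survey"].mean().mul(100).round(1)
    print(rates)

    # Frequency distribution for one survey item, restricted to respondents
    freqs = (applicants.loc[applicants["returned_survey"], "web_access_freq"]
             .value_counts(normalize=True).mul(100).round(1))
    print(freqs)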
With regard to the use of computers and the Internet, accepted applicants were queried to ascertain 1) software and communication applications used and extent of that usage, 2) Internet/computer sources used to gather information about schools during the application process, 3) the characteristics of the computer hardware utilized to connect to the Internet, 4) reasons that prevented individuals from submitting their applications electronically, and 5) the 66 importance of having a specific array of computing/electronic capabilities available to them at the institution. Senior Survey Data from the 2000 Senior Survey was used to determine the extent of e-mail and other electronic communication usage among currently enrolled students. Graduating seniors were asked to complete the survey during the week prior to graduation. Historically the response rate to this data gathering has been quite high since graduating senior must complete the survey in order to participate in commencement ceremonies. Of the 1,183 graduating seniors, 1,131 submitted completed surveys for a response rate of 96 percent. In addition to the standard items that have been used for the past ten years, new items were developed with the purpose of gaining a better understanding of students’ electronic communication usage. Survey items were in the form of categorical responses, Likerttype scales, and open-ended comments. Graduating seniors were asked a variety of questions regarding with whom and how often they communicated via e-mail. In addition, they were asked the frequency with which they participated in academic listserves, electronic chat room and used the internet for research or classroom assignments. Analysis Data from the surveys were analyzed in SPSS by running frequencies on the relevant variable items. Open-ended comments related to relevant items were also organized and analyzed. Analysis of open-ended items consisted of coding and categorizing responses. Findings Accepted Applicants Generally, accepted applicants used the Web, e-mail, and instant communication – also known as interactive real-time chat (IRC)—extensively (see Table 1). Over 78 percent of the respondents used e-mail at least once per day, and 70 percent accessed the Web at least once per day. Surprisingly, 59 percent of the respondents stated that they used IRC at least once per day. And, while the use of online shopping on a regular basis was minimal, over half (56.2%) had shopped online at some time. 67 TABLE 1 GENERAL USE OF ELECTRONIC COMMUNICATION AND THE INTERNET Several times per day Once per day Once per week Once per month Few times per year Never 1. Web Access 33.5% 36.6% 19.5% 4.5% 1.7% 4.2% 2. E-Mail 35.0% 43.7% 15.9% 2.1% 2.1% 1.2% 3. Instant Communication 32.8% 26.1% 14.4% 5.2% 5.4% 16.2% 4. On-Line Shopping 0.1% 0.3% 5.0% 16.7% 34.1% 43.8% Accepted applicants were asked to identify the methods they used to make initial contact with the institution. As students began collecting information in order to determine where they would apply, they most frequently accessed college web pages (58.4%). Visiting campus was the second most popular method of collecting initial information (33.9%), followed by email contact with admissions offices (24.3%) and calling admissions offices (23.8%) (see Table 2). TABLE 2 FIRST CONTACT WITH COLLEGES TO COLLECT INFORMATION % 1. Accessed College Web Pages 58.4% 2. Called Admissions Offices 23.8% 3. Wrote to Admissions Offices 11.8% 4. Emailed Admissions Offices 24.3% 5. Visited Campus 33.9% 6. 
Other 8.2% 68 In addition, accepted applicants were asked to identify specific aspects of the Web or Internet that were used to help decide where to apply. The most frequently cited resource was individual college and university WWW pages, as stated by 82% of the respondents. Almost 29 percent of the students also stated that they relied on the U.S. News & World Report WWW site, and almost 28 percent used general college information sites (see Table 3). Only 14 percent of the accepted applicants indicated that they did not use web or Internet resources to aid in their application decisions. TABLE 3 ASPECTS OF WEB OR INTERNET USED TO HELP DECIDE WHERE TO APPLY % 1. Specific College/University WWW Pages 82.0% 2. General College Information Sites 27.9% 3. US News & World Report WWW site 28.7% 4. Other WWW site 4.5% 5. Did not use the WWW or Internet in my application decision 13.8% In regards to learning specifically about Tufts, over 77 percent of accepted applicants used the admissions web site. However, only 31 percent of the accepted applicants indicated that they had e-mail contact with the Tufts admissions office and just over 21 percent took the Virtual Tour on the Tufts WWW page. Like many institutions, Tufts has a web-based application process, however, only 18.2 % of the accepted applicants utilized it to submit their applications (see Table 4). TABLE 4 USE ELECTRONIC COMMUNICATION AND THE INTERNET TO COMMUNICATE WITH, AND TO LEARN ABOUT TUFTS YES NO 1. Accessed Admissions Web Site 77.1% 22.9% 2. Email Contact with Admissions 30.9% 69.9% 21.1% 78.9% 18.2% 81.8% 3. Took Virtual Tour 4. Submitted an Electronic Application 69 Those applicants who did not submit an application electronically, through the web, were asked why. Over two-thirds of the respondents (68.5%) stated that paper applications seemed more reliable, and over 36 percent felt that an electronic application was too impersonal (see Table 5). A significant number of students were also concerned that they did not know how the application would look when it arrived at the Admissions Office (23.1%). TABLE 5 WHY AN ELECTRONIC APPLICATION WAS NOT SUBMITTED 1. Did not have the computer/WWW skills 2. Was not aware that electronic submission was an option 3. Started applying electronically, but encountered technical problems 4. Did not have access to a personal computer 5. My computer couldn't handle the task 6. Too impersonal 7. Paper seemed more reliable 8. Didn't know how application would look when it arrived at Tufts 9. Thought I might be at a competitive disadvantage 10. Parents/friends preferred that I submit applications on paper 11. My high school required me to submit an application on paper 12. Don't like using credit card via the WWW 13. Other concern % 3.4% 8.2% 6.4% 0.7% 4.5% 36.1% 68.5% 23.1% 12.4% 20.2% 7.4% 16.2% 11.0% Accepted applicants were also asked to identify the computer and Internet capabilities they felt were important aspects of a college campus environment. Internet/WWW access and e-mail were, by a wide margin, considered to be the most important capabilities. Over 97 percent felt that Internet/WWW access was either essential or very important, and over 94 percent felt that e-mail was either essential or very important. Relative to the other capabilities, students did not feel that online study/discussion groups were an important capability. Only 27 percent of the respondents indicated that this function was essential or very important (see Table 6). 
70 TABLE 6 IMPORTANCE OF INSTITUTIONAL COMPUTER AND INTERNET CAPABILITIES Essential Very Important Somewhat Important Not At All Important 1. E-mail 78.5% 16.3% 4.2% 1.0% 2. Internet/WWW access 85.1% 12.0% 2.8% 0.2% 3. Library access from the WWW 44.7% 37.7% 15.7% 1.9% 4. Electronic class registration 15.7% 36.7% 37.0% 10.5% 5. Online study/discussion groups 7.9% 19.2% 46.7% 26.2% 6. Online course descriptions 25.7% 41.6% 26.7% 5.9% 7. Online information access (grades, etc.) 26.7% 40.1% 26.4% 6.0% Graduating Seniors As with the accepted applicants, graduating seniors used e-mail extensively on a regular basis during their senior year. Over 91 percent identified communicating by email with other Tufts students at least once per week, and almost 80 percent communicated with students at other colleges at least once per week. Only 52 percent, however, communicated with faculty at least once per week. And while over 95 percent used the Internet for research or homework, only 44 percent had participated in an academic listserv. A significant number of graduating seniors (76.9%) also had not participated in an electronic chatroom (see Table 7). 71 TABLE 7 USE OF ELECTRONIC COMMUNICATION BY STUDENTS DURING SENIOR YEAR Daily 2-3 times per week Once per week 1-2 times per month Never 1. Communicate with Tufts faculty 5.3% 18.8% 28.0% 45.1% 2.8% 2. Communicate with Tufts students 61.3% 21.1% 9.0% 5.7% 2.9% 1.6% 2.1% 3.6% 19.3% 73.4% 4. Communicate with students at 36.4% other colleges 27.1% 16.4% 14.8% 5.3% 5. Communicate with friends 48.6% 25.2% 13.9% 9.7% 2.7% 6. Communicate with family 23.0% 25.7% 20.0% 18.5% 12.7% 7. Participate in an academic listserv 8.4% 9.8% 10.8% 14.6% 56.5% 21.6% 31.1% 20.2% 22.9% 4.1% 9. Participate in electronic chatrooms 4.4% 3.1% 4.2% 11.4% 76.9% 10. Other Internet use 52.3% 21.9% 12.8% 7.8% 5.1% 3. Communicate with faculty at other colleges 8. Use the Internet for research or homework Conclusions Results of this study verify the extensive use of and reliance on electronic communication through the Internet by college-bound students and by students currently enrolled at Tufts. Both accepted applicants and graduating seniors use e-mail extensively to communicate with friends and other students, and a majority of college-bound students are now using instant communication on a regular basis. All groups are also using the Web extensively as an information source and to conduct research and complete 72 homework assignments. Virtually all accepted applicants noted the great importance of a campus’ e-mail and Internet capabilities. Students are not, however, using the Internet extensively to communicate with Tufts. Less than one-third of applicants communicated by e-mail with the Admissions Office during the application process, and less than one-fifth of applicants submitted an electronic application. And while the submission of an electronic applications has added risks, such as the uncertainty of the technology in submitting a competitive application or the use of a credit card online, the fact that over half of the accepted applicants had shopped online shows that they have experienced the technology and have addressed those risks. Graduating seniors also noted much less e-mail use to communicate with faculty than they used to communicate with other students. Results also show that there is a lack of interest, among both groups, in web-based group activities. 
Accepted applicants did not rate online discussion groups as an important capability on campus, and graduating seniors did not participate in listservs or chat rooms to a large extent. Implications These findings demonstrate that currently there appears to be a gap between the general use of electronic communications among undergraduates and college-bound students and their use of this technology to communicate and interact with the institution. At this time there is relatively little information available to help faculty and administrators to understand why these gaps exist. Identification of such a divide may serve as an impetus to explore whether current policies, practices, or infrastructure are impediments to electronic communication. In addition, the findings suggest that institutions my want to examine the use of newer communication technologies – such as IRC – and how they may be utilized to increase the degree of contact with current and potential students. Currently the information that exists regarding computer choices and computing skills of college freshmen is very limited (Olsen, 2000). If institutions are to be effective in providing electronic forms of communication opportunities, the time has come when the higher education community must obtain a clearer understanding of 1) students level of computer use proficiency, 2) their preferences regarding electronic communication, 3) factors that prevent the use of electronic communication with specific populations, and 4) reasons that preclude a subset of the population from utilizing these new technologies. Future survey research endeavors at Tufts will include additional items designed to shed additional light on questions surrounding use and non-use of electronic communication. 73 References Chidley, J. Cybertime: living by the credo ‘boot up, log on and connect,’ university students are mounting a techno-revolution. (November 25,1996) Maclean’s 109, 68-69. Duin, A. H. & Archee, R. (1996). Collaboration via e-mail and Internet Relay Chat: Understanding time and technology. Technical Communication 43(4), 402-412. Fishman, B.J. (1999). Characteristics of students related to computer-mediated communications activity. Journal of Research on Computing in Education 32(2), 73-97. Frand, J.L. (2000). The Information age mindset: Changes in students and implications for higher education. Educause Review 35(5), 15-24. Green, K.C. (1998). Campus Computing, 1998. The Ninth National Survey of Desktop Computing and Information Technology in American Higher Education. Report issued by Campus Computing, Encino, CA. Green, K.C. (1999). Campus Computing, 1999. The 1999 National Survey of Desktop Information Technology in American Higher Education: The Continuing Challenge of Instructional Integration and User Support Report issued by Campus Computing, Encino, CA. Milliron, M. and Miles, C. (2000). Education in a Digital Democracy: Leading the charge for learning about, with and beyond technology. Educause Review 35(6), 50-62. Olsen, F. Campus newcomers arrive with more skill, better gear. The Chronicle of Higher Education, November 3, 2000, http://chronicle.com/free/v47/i10/10a03901.htm. Piirto, J. (1998). University student attitudes toward e-mail as opposed to written documents. Computers in the Schools 14(3/4), 25-32. Ritter, M.E. & Lemka, K.A. (2000). Addressing the ‘seven principles for good practicein undergraduate education with Internet-enhanced education. Journal of Geography in Higher Education 24(1), 100-108. Selwyn, N. (1998). 
The effect of using a home computer on students’ educational use of IT. Computers & Education 31(2), 211-227. Singleton, S. and Mast, L. (2000). How does the empty glass fill? Educause Review 35(6), 30-36. U.S. Department of Commerce, National Telecommunications and Information Administration. Falling through the net: toward digital inclusion. Washington, DC, October, 2000. 74 Windschitl, M. & Lesehm-Ackerman, A. (1997). Learning teams students and the college e-mail culture. Journal of the Freshman Year Experience & Students in Transition 9(2), 53-82. Zagorsky, J.L. (1997). E-mail, computer usage and college students: A case study. Education 118, 47-55. 75 76 DEVELOPING A WEB-BASED VERSION OF THE COLLEGE BOARD’S ADMITTED STUDENT QUESTIONNAIRE™ Ellen Kanarek Vice President Applied Educational Research, Inc. Introduction The Admitted Student Questionnaire™ (ASQ) and the Admitted Student Questionnaire Plus™ (ASQ+) are college choice surveys sponsored by the College Board. The ASQ, first offered in 1988, asks students to compare the college that mailed them the survey with the set of other colleges they seriously considered attending. The ASQ+, begun in 1992, asks students to name and rate two specific colleges in addition to the one initiating the survey. More than 200 colleges participate in the service each year, of which more than 85% have participated more than once. Despite the rise of Web-based surveys, the paper ASQ has shown no dropoff in participation. Nevertheless, the handwriting appears to be on the monitor, and the College Board has begun to consider whether the paper ASQ should be replaced, or at least supplemented, by a Web-based version. This paper describes the Board’s first efforts to evaluate the feasibility and effectiveness of an on-line ASQ+. Development of the project The first discussions about a Web-based ASQ+ began on January 17, 2000 between Applied Educational Research (AER) and Logicat, Inc. (both of which are subcontractors for the College Board on this project). Things moved very quickly after that to plan a pilot study that could be conducted during the current ASQ/ASQ+ cycle. Logicat had the responsibility to create the actual Web-based survey and to work out the details of hosting it, collecting tracking and evaluation information, and converting the data into a format that would be compatible with the ASCII files that are currently used in the analysis of ASQ/ASQ+ data. AER’s role was to act as liaison with the colleges chosen as participants in the pilot study. Three participating colleges were recruited. All three institutions conducted an ASQ+ study in 1999, and would normally not have participated in 2000 (College A and College C tend to participate every other year, and College B every three years). The contacts were asked to participate in a regular (i.e., paper) ASQ+ survey, with the added wrinkle that their admitted students would be offered the opportunity and encouraged to complete the survey on the Web. All of the study costs that would normally be paid to the College Board (questionnaire printing and processing, participation fee, local question fee, data CD) would be borne by the Board, leaving the pilot colleges with only the actual mailing costs to cover. 77 The target date for the survey to go “live” (i.e., be available to students on-line) was June 1. 
The programmers were able to write the basic survey in about three weeks, and the time between that initial version and June 1 was spent refining the instrument and data verification procedures. During that period the three pilot colleges were asked to provide lists of Social Security numbers and names of the students who would be authorized to enter the Web site. They were also directed to prepare email lists for the students they would survey. An email message would go out once the site was ready telling the students about the ASQ and providing a direct link to each college’s survey site. Study Design Issues In an interest survey conducted by email in early 1999, the greatest concerns about a Web ASQ expressed by recent ASQ participants were whether they would be able to get as much information from a Web version as they currently get from the paper survey, and whether response rates would suffer with a Web survey. While there were many variables that could be studied in the pilot -- access to the Web and to computers, response rates, completion time, necessity for and effect of incentives, participating colleges’ access to their ASQ data, ASQ vs. ASQ+, products (reports), etc. -- in order both to keep this pilot manageable given the short timeframe for development and to minimize ambiguity of results, this pilot was to have very limited objectives. In particular: • • • • • The reports would be identical to those the colleges currently receive. The pilot would use the ASQ+ only (and not the ASQ). The survey itself would mimic the current ASQ+ as closely as possible, given the use of survey techniques appropriate to the Web. Pilot colleges would be drawn only from the pool of past ASQ+ users, in order to have baseline response rates for comparison. The Web survey would not permit colleges to ask “local questions” (extra questions devised by the colleges) at this time. Questionnaire Design Issues In general, it proved quite easy to translate the ASQ+ survey to the Web. Each page of the paper survey became a separate Web page; clicking on a “next” button brought up another page. Students were able to go back and forward between pages, and could leave the survey altogether and return later until the point when they clicked on the “submit” button. The final page of the regular survey was followed by a few questions evaluating the survey process. The “submit” button was located at the bottom of this last page. The section of the survey that presented the greatest programming difficulties was that dealing with colleges to which the student applied. The principal concern, of course, was to design a survey form that would encourage the students to fill it out completely. 78 What then was the best way to collect information on colleges applied to by the responding students? The paper survey asks students to write in the name of the colleges to which they applied, along with the schools’ city and state (to aid in identification of the colleges at the data entry end). One alternative for the Web survey was to have the students do essentially the same thing: type in the name and location of the schools to which they applied. The major alternative to that method was to provide a drop-down list of institutions from which the student could select the ones he/she wanted to include in the survey. The latter method had the advantages that the students would have less information to enter themselves, and that there would be a significantly smaller chance of data entry errors. 
On the other hand, there was the question of how much patience students would have to search through up to 12 drop-down lists, i.e., whether they would give up at some point in that process and fail to complete the rest of the questionnaire. Even if the students were asked to enter each college's state, so as to limit the number of schools included in the drop-down lists to those in the state specified, they could still be presented with a list of over 200 schools if they entered "California," for example. It could, in fact, require more time for the student to complete this portion of the survey using drop-down lists than using a more old-fashioned write-in method. The author felt very strongly that the write-in method was preferable, and the programmers agreed to program the survey accordingly.

Once the write-in method was agreed upon, the burden of verifying the accuracy of the students' entries fell primarily on the programmers. Due to time and budget constraints, the initial program looked for an exact match between the student's entry and the name of the college as stored in the College Board's Annual Survey of Colleges (ASC) data file. Anything less than an exact match would produce an error on the college entry "validation page" that was included as part of the Web survey. (For any entry that did not produce an exact match, the students were presented with a drop-down list of institutions in the given state that began with the same letter as their entry.) Based on past experience with the paper survey, where students frequently write in nicknames such as "UVM," "Sewanee," or "Ole Miss," the college list was expanded to include as many nicknames and variations on the official name as possible.

While the matching difficulties seemed to have been resolved before the site went live, real-time student entries demonstrated that the lookup process would still have to be revised. For example, if the University of Illinois at Urbana was entered by the student as the University of Illinois, Urbana, or the University of Illinois at Urbana-Champaign, or the University of Illinois-Urbana, an error message would be generated, even though to the student's eye these were clearly the same institution. The programmers decided to go back and modify the programming to ignore punctuation and such prepositions as "at," "of," "in," etc. They accomplished this very quickly, thereby reducing the number of false errors in student entries, but the problem was not eliminated entirely. Ultimately, if the student could not or did not correct the entry on the validation page, the apparently erroneous entries were saved in a separate file to be dealt with at the time of analysis.
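The revision the programmers describe -- ignoring punctuation and short prepositions before comparing a student's entry with the official institution name -- amounts to normalizing both strings and then looking for an exact match on the normalized forms. A rough sketch of that idea follows (Python; the normalization rules and the sample list are illustrative, not the actual ASQ+ code):

    import re

    STOPWORDS = {"at", "of", "in", "the"}   # connector words ignored in matching

    def normalize(name: str) -> str:
        """Lowercase, strip punctuation, and drop short connector words."""
        name = re.sub(r"[^\w\s]", " ", name.lower())
        tokens = [t for t in name.split() if t not in STOPWORDS]
        return " ".join(tokens)

    OFFICIAL_NAMES = [
        "University of Illinois at Urbana-Champaign",
        "University of Vermont",
    ]
    lookup = {normalize(n): n for n in OFFICIAL_NAMES}

    for entry in ["University of Illinois, Urbana-Champaign",
                  "University of Illinois-Urbana Champaign"]:
        match = lookup.get(normalize(entry))
        print(entry, "->", match or "no exact match; show drop-down list for the state")

Note that entries that drop or add whole words (for example, "University of Illinois" with the campus omitted) would still fail under this scheme and fall through to the drop-down list, which is consistent with the residual errors described above.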
The information requested consisted of Social Security Number, to be used as the user ID, and the student’s first name, to be used as the password. A few students appeared to use their middle name more frequently than their first name, and the access information was adjusted accordingly. Prior to the grand opening Logicat also developed a “report” site, which would provide information on the site activity overall and for each school. There was also a page summarizing the students’ evaluations of the Web ASQ+. The first major problem surfaced about a month before the target date: College A discovered that applicant email addresses were not stored in a central electronic file, but merely in each student’s individual file. Thus there was no way to send out a mass email notifying students about the Web survey: College A would only be able to do the paper mailing. Since it was too late to recruit another school for the pilot, and since College A had already ordered and received its paper surveys, there was nothing for it but to continue as best we could. The college contact was instructed to make sure that the enclosure notifying the students about the Web option was printed on a separate, eyecatching sheet of paper. On the bright side, this study would provide some indication of the students’ willingness to take the extra step to go on-line specifically for the purpose of filling out the ASQ+, rather than simply clicking on the direct link to the survey that was to have been part of the email notification. Over Memorial Day weekend the programmers worked on transferring the survey to the external server they had chosen. During this period College A’s study produced another problem: the contact had decided that since College A would be doing a mail survey only, the mailing should take place before the site went live, so that students would be notified about the site in time for them to access it as soon as it was available. The introductory letter was dated May 24, and stated that the site would be available ”at the end of May.” In fact, some students apparently attempted to access the site as soon as they received the letter. The programmers discovered that questionnaires were being submitted while the site was still being tested and hosted at Logicat; those entries would have to be transferred to the “real” host later on. 80 On Friday, June 2, notification was received that the site was up and running for the students, and the contacts at the other two colleges were immediately sent the go-ahead for their mass student email. Activity was heavy for College C and College A over that first weekend, but very light for College B. Some students who experienced problems emailed the contacts, who forwarded the messages to AER. The messages were then passed on to the programmer, who responded very quickly. Some of the initial problems occurred in the section relating to colleges applied to, resulting in some of the programming changes described above. In a few cases student ID’s had not been included on the list of authorized respondents, but they were added as soon as the omission was discovered. At the beginning of the following week, activity was still almost non-existent on the College B site, raising concerns that there was a problem. The contact at College B also began to forward messages from students saying that their user ID (i.e., their SSN) and/or first name was not being accepted. 
On June 8 the contact sent an explanatory email: in a nutshell, the problem was that the lists of accepted and withdrawn students that were originally sent to the programmers did not match the lists of students who were sent the email notification about the Web option. College B produced new lists, and activity on that site immediately picked up. It should also be mentioned that the contact was concerned that non-enrolling students in particular were deleting messages from College B without reading them, and asked that the followup email come from AER. The final major problem encountered affected College C more than the other two. There was a bug in the survey that only manifested itself once the survey was moved from Logicat to the ISP server. Since a number of students logged on earlier than the programmers had anticipated, there wasn’t adequate time to test the survey on the ISP’s server. For more than 100 College C students, data from the sections on importance and quality ratings appeared to have been lost. (These students had gone all the way through the survey and clicked on the submit button, so it seems likely that they did finish, rather than submitting a survey with a lot of missing data.) This also occurred for about 40 College A students, but College B’s respondents were unaffected because the problem was resolved by the time they were able to access the site. The best solution that could be devised was to reopen the surveys of the students affected (remember that they had locked their data originally by “submitting” their responses). Logicat compiled a list of the students to be contacted, College C provided their email addresses, and they were all sent a message explaining the project and the problem and asking for their helping in entering the missing data again. Although it was difficult to tell exactly how many students responded to this appeal, 40-60 appeared to have done so. Response Rates The results of the three ASQ+ studies have not been tabulated, but it appears that for College A and College B the response rates for the combined Web and paper versions of 81 the survey are close to but slightly lower than they were for the paper survey alone in 1999. College C shows a higher response rate. Table 1 compares response rates at these three schools for all ASQ+ studies done since 1992. Note that the wide fluctuation in enrolling response rates at College B is due in part to timing of the survey’s administration: in the years when this rate was over 85%, College B administered the enrolling surveys to a captive audience at Freshman Orientation. Mailed surveys produced a much lower response. Table 1: ASQ+ Response Rate History College A NonEnrolling Enrolling 1991 1992 1993 1994 1995 1996 1997 1999 2000 (total) 2000 (paper) 2000 (Web) 82% 70% 89% 79% 76% 57% 56% 44% 56% 48% 39% 33% Total 66% 55% College B NonEnrolling Enrolling Total 57% 88% 32% 37% 45% 62% 98% 41% 70% 95% 24% 60% 59% 51% 49% 24% 37% 50% 43% 25% 35% 41% 31% 20% 26% 9% 11% 5% 9% 66% College C NonEnrolling Enrolling Total 83% 31% 49% 69% 29% 46% 64% 56% 38% 26% 54% 41% 49% 50% 25% 37% 12% N.B. None of these colleges participated in 1998. Enrolling/non-enrolling breakdowns are not yet available for Colleges A and C. Evaluation The contacts at the three participating colleges were asked to evaluate their experiences with the Web ASQ+ using a questionnaire emailed to them at the beginning of September. See Table 2 below for a summary of responses. 
It would be difficult to draw any conclusions on the basis of these three studies alone, since no college carried out the agreed-upon plan: email notification to students approximately one week before the paper mailing went out (including notification of the option to complete the survey on the Web), with email and paper follow-ups sent subsequently.

Despite the shakedown problems at the beginning, all three contacts seem to have had a positive experience with the Web ASQ+. Once the final response rate information is available, however, they will be asked again. College A, in particular, has been quite concerned about the decline in non-enrolling response rates, and was hopeful that a Web option would alleviate that problem.

Table 3 summarizes student evaluations of the Web ASQ+. In general, the survey did not require much time at all, and was not perceived as too long or difficult to complete. Most of the students would fill out another survey like this one on the Web, and they strongly prefer Web surveys to paper surveys in general. It is interesting that 28% of the people who received paper surveys said they did not fill any of them out. That figure, if accurate, represents an important target group for a Web-based ASQ+, especially if those are non-enrolling students.

From AER's point of view, the project went surprisingly smoothly for those students who chose the Web option, but the response rates were very disappointing. At this point, however, it would be very difficult to attribute the low response to any one thing: there was some type of problem, and a different problem, with each of the three studies that might have discouraged the students. Table 1 shows that ASQ response rates, especially from non-matriculants, have been declining steadily. The combination of paper and Web surveys may have helped slow this decline, although we do not have enough information to document this impression.

Recommendations

On the basis of both college and student comments it is clear that a Web ASQ is desirable. It is not clear, however, that the Web ASQ should completely replace the paper version. For one thing, good, working email addresses are still not universally available, at either the student or the college level. For another, questions about the make-up of the Web respondents have yet to be answered: are they representative of the total group of ASQ respondents in terms of both demographics and attitudes? Third, the issue of how to price a Web-based study has yet to be addressed. Nevertheless, the satisfaction levels of this year's pilot colleges and their students were high enough to encourage continued development and testing. A second, expanded pilot would provide much more information on what to expect: the types of problems we could encounter on a regular basis, response rates from different types of institutions conducting the study somewhat differently (e.g., with the initial mailing/notification in July or August, rather than May or June), ease of dealing with the data at both the collection and analysis ends, etc.

Table 2: Participant Evaluations of the Web ASQ+ College A College B Univ.
College C Students admitted for fall 2000 6587 2857 7913 Expected to enroll 2246 1472 3944 Total surveyed 6375 1757 2999 Percent of total admits with good email No systematic email 74% enrolling 66% non-enrolling 79% enrolling 83% non-enrolling Percent of students surveyed with good email No systematic email 70% enrolling 61% non-enrolling 78% enrolling 84% non-enrolling No Yes Yes None Two One NA 2 weeks apart 3 weeks later When was paper survey mailed? First week in June June 7, 2000 First week in July Paper mailing included advisory about Web option? Yes Yes No Paper followup mailing with second questionnaire? Yes Yes, to non-enrolling only 2nd mailing yes, but not with second questionnaire 2nd mailing included reminder about Web option? Yes Yes Yes Sent an initial email about the ASQ? Number of email reminders When? 84 Problems “Just a couple of students had trouble” How were problems handled? “We forwarded the e-mail to (AER). The problems were diagnosed and” (sic) Comments from students? “A couple of positive comments. They said the web survey was easier and quicker to complete than the paper.” Did students ask or comment about incentives? No “There was a slight mix up at the beginning with the social security numbers used to gain access to the Web ASQ. The mistake was on our end. Our office did not provide a complete list of SSNs to ASQ and thus, when the email and paper requests were mailed, some students could not gain access to the Web ASQ because ASQ did not have correct student information. Some students did not complete the entire survey. We are not sure why they did not finish the survey. However, we sent these students email reminders to log back in to the system and asked them to finish the survey.” “We sent a new, complete list of SSNs to ASQ and emailed the students who responded to our initial mailings with access problems. This was easily corrected and some of the students returned to complete the Web ASQ.” “Yes. Some non-enrolling students were negative and did not want to be bothered with emails. We replied to these negative emails with a note of thanks and best wishes on their future. We then removed them from all future mailings.” No 85 “-- Changed SSNs at last moment (only a couple) -- First people could only fill out first page” “Quick email to (AER)” “Usual group of appreciative comments from non-enrollees who assumed we only wanted ASQ comments from enrollers” No Respondent’s opinion Suggestions Preferred method for future studies “The web version of the ASQ is easier and faster to complete than the paper version. I thought it was set up in a way easy for students to quickly understand what is requested. It would be nice if we could find out if the students who completed the online version were students who would not have completed the survey otherwise.” “For College A, having e-mail addresses (next year) will help; also, a couple of colleagues suggested going out earlier with the e-mail, mailing of paper, and then follow-ups.” Paper with Web option Other comments 86 “Our office believes the Web ASQ is a great way to efficiently collect data. With growing student access to the web, this is quick and easy way for students to comment about the university. The lay out of the web version was fine and the ease was much better after we corrected the SSN problem on our end. Some students voiced concerns over using their SSN to gain access to the site.” “Change the way you provide access to the site for students. A few students seemed reluctant to use their own SSN. 
Is there another way? Maybe provide students the ability to create their own username and password? Just an idea.” Paper with Web option “This is a great service and will only grow with time. Thanks for giving us the opportunity to be a part of the groundbreaking service.” “I’d really like to push for its use @ College C earlier in the cycle – May, then June. I hope email owners are representative of the whole.” “The ‘market’ is so volatile that it’ll be hard to compare seniors from one year to the next. Each year is like a whole new generation in terms of sophistication and use of the Internet.” Web only Table 3: Student Evaluations of the Web ASQ+ 1. Approximately how much time did you spend in completing this questionnaire? 16.5 minutes 2. Would you say the amount of time spent is: Acceptable Somewhat too long Much too long 3. How would you rate the ease of entering your responses and moving through the questionnaire? Very Easy Fairly Easy About Right Fairly Cumbersome Very Cumbersome 4. 40% 32% 20% 7% 1% Would you complete another questionnaire presented on the web like this one if you were asked to by another college that had offered you admission? Yes, definitely Yes, probably No, probably not No, definitely not 5. 69% 27% 4% 23% 56% 18% 2% In the future, would you prefer to respond to this kind of questionnaire: In electronic form, like this one In paper form, like a typical questionnaire in the mail No preference 6. 82% 2% 15% Did you receive questionnaire(s) in the mail from any other college(s)? Yes No 42% 57% If Yes: Did you respond to: All of them Some of them None of them Total visits to Evaluation Page Total surveys completed 35% 37% 28% 882 772 87 Table 4: Survey Participation, by College College A College B College C Total Total number login users Average daily login users 612 4.2 149 1.0 370 2.6 1135 7.9 Total completed surveys Surveys completed/attempted Average daily submitted surveys Partially completed surveys 461 75% 3.2 151 108 72% 0.7 41 203 55% 1.4 167 772 68% 5.4 363 Number of users completing page 1 Number of users completing page 2 Number of users completing page 3 Number of users completing page 4 Number of users completing page 5 612 553 525 520 509 Average # completed pages per login 4.4 149 129 114 112 109 (90%) (86%) (85%) (83%) 4.1 88 (87%) (77%) (75%) (73%) 370 305 271 269 264 4.0 (82%) (73%) (73%) (71%) 1135 997 918 908 886 4.3 (88%) (81%) (80%) (78%) CREATION OF A SCALE TO MEASURE FACULTY DEVELOPMENT NEEDS AND MOTIVATION TO PARTICIPATE IN DEVELOPMENT PROGRAMS Arthur Kramer Director, Institutional Research New Jersey City University Summary A survey was administered to full-time faculty to assess their perceptions of the University’s professional development program. Frequencies of responses were reported and the questionnaire responses were put through a factor analysis to explore the underlying qualities guiding the responses. The survey results showed tenured faculty desire experts in the disciplines be brought to campus to present findings of either the latest research in the discipline, or findings on the best/newest ways to teach the discipline. Twelve factors emerged of which six were judged to be interpretable. Communication surfaced as a large factor guiding the responses. Although the factor was judged positively, there were some specific areas in which greater communication was desired. These areas include communication of policy changes, on and off-campus opportunities, and planning of activities. 
The other factors concerned teaching and assessment, meetings, short- and long-term funding, and the planning and usefulness of previous activities. Recommendations for future research were offered, including assessment of the personality components that stimulate participation in professional development activities and assessment of the funding of such activities.

Introduction

The area of faculty development in higher education has grown from the initial implementation of sabbatical leaves, at Harvard in 1810, to structured programs targeting individual growth and vitality. Often, the differences among the initiatives are based on the missions of the institutions implementing them: teaching institutions often emphasize keeping current in the discipline and instructional strategies, while research institutions are more concerned with the performance of state-of-the-art research (Clark and Corcoran, 1989). Clark and Corcoran (1989) also note that faculty members at different stages in their careers require, and often anticipate, different types of programs. This is because the career development needs and expectations of new faculty embarking on a career are different from the needs of tenured faculty in mid-career or planning for retirement. This is in concordance with the views of psychologists who theorize about the different stages of people's lives (e.g., Erikson, 1950; Super, 1984). For this reason, Clark and Corcoran (1989) advocate for programs addressing not only effectiveness in the classroom and research, but also those incorporating life transitions, such as career counseling and retirement planning.

An assessment of the strictly professional career development activities utilized by colleges and universities found that most brought guest speakers to campus, used luncheon gatherings, and scheduled special retreats (Gullat and Weaver, 1997). Several studies spoke to the effectiveness of the aforementioned activities, and included aspects of communication between the administration and faculty, and between a professional development committee, comprised of faculty representatives, and the faculty as a whole. The effectiveness of the latter type of committee was mediated by its composition: committees made up of recruited faculty members who were campus leaders were seen as more effective than committees constructed of volunteers only. Another intervening factor was found to be the level of administrative support, whose impact was seen on both faculty acceptance and the effectiveness of the programs. Often, the type of institution and the institutional culture affected the expressed satisfaction with, and effectiveness of, the development programs and activities (Sikes and Barrett, 1976; Overlook, 1994).

Missing from the previous research is an assessment of the magnitude of faculty desire for any particular type of program or initiative, as well as a comparison between the programs tenured faculty wanted and those desired by non-tenured faculty. The current study attempted to assess the professional development program for full-time faculty at a public teaching university and to create a hierarchy of the perceived usefulness of activities that have been implemented. A second goal of the study was to compare the differences between the tenured and non-tenured groups; and finally, a third goal was to initiate the construction of a scale to assess the factors underlying the faculty's responses.
Method

During the Fall 1999 semester, a questionnaire to elicit faculty input on the University's faculty-development activities was administered to the 240 full-time faculty. Sixty-seven usable surveys were returned, yielding a response rate of 28%. The survey consisted of 37 items asking for respondent opinion about activities, formats, communications and policies that are utilized in higher education. The respondent's task was to rate the usefulness, sufficiency, effectiveness, or satisfaction with a program, initiative or format that had been utilized in recent years. The levels of the descriptors were presented as five-point scales anchored at 1 = not useful, not sufficient, etc., to 5 = useful, etc. A third section asked about demographics: the number of development activities in which the respondent had participated in the past two years; number of years teaching; number of years teaching at the University; highest degree and years since highest degree; tenure status; and department.

The data were analyzed first in terms of the percentages of responses received in each category of the scales. Then, the data were put through a principal components (factor) analysis in an effort to discern the underlying dimensions.

Demographics of respondents

Sixty-two percent of the respondents were from the College of Arts and Science (n=34); 22% were from the College of Education (n=12); and 16% were from Professional Studies (n=9). Twelve respondents did not provide this information. Looking at the institution as a whole for AY 1999-2000, the respondents from the Colleges of Professional Studies and Education were slightly over-represented and those from the College of Arts and Science under-represented (the institutional proportions were 71% Arts and Science, 17% Education, and 12% Professional Studies). Faculty with tenure were also somewhat under-represented, because institutional data files reveal 75% of the full-time faculty were tenured. Of the total sample, 60% were faculty with tenure.

Tenure status

Value Label      Value   Frequency   Percent   Valid Percent   Cum Percent
Tenured            1        40         59.7        67.8            67.8
Not tenured        2        19         28.4        32.2           100.0
Missing            .         8         11.9      Missing
Total                       67        100.0       100.0

Valid cases: 59    Missing cases: 8

Other data about the respondents reveal that 75% had either a Ph.D. or Ed.D., and over 60% had earned that degree 10 years or more ago.

Years since earning their highest degree

Value Label            Frequency   Percent   Valid Percent   Cum Percent
Less than 10 years        22         32.8        37.9            37.9
10-15 years                7         10.4        12.1            50.0
16-20 years               10         14.9        17.2            67.2
21-25 years                8         11.9        13.8            81.0
26-30 years                2          3.0         3.4            84.5
31-35 years                7         10.4        12.1            96.6
More than 35 years         2          3.0         3.4           100.0
Missing                    9         13.4      Missing
Total                     67        100.0       100.0

Valid cases: 58    Missing cases: 9

The average number of years the respondents had been teaching in higher education was 20; the median was 19.75 years, and the distribution was bimodal at 10 and 30 years. The range was from a minimum of 2 to a maximum of 39 years. The median number of years teaching at the present university was 9.5, and the mode was 30 years (11 respondents reported 30 years). The reported range of years at the University was from 1 to 35.

The profile of the typical respondent is that of a faculty member with tenure who obtained his/her highest degree over a decade ago and has been teaching for approximately 20 years, mostly at the present university; many earned their highest degree while teaching there.
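To make the two-step analysis described above more concrete, the sketch below outlines one way it could be reproduced in Python. The file name, the data layout, and the listwise handling of missing responses are assumptions made for illustration only; the original analysis was carried out in a standard statistical package, and the oblique (oblimin) rotation reported in the results would be applied to the retained components with a dedicated factor-analysis routine.

    import numpy as np
    import pandas as pd

    # Hypothetical layout: one row per respondent, columns Q1-Q43 holding the
    # 1-5 ratings, with missing responses stored as NaN.
    df = pd.read_csv("faculty_development_survey.csv")
    items = [f"Q{i}" for i in range(1, 44)]

    # Step 1: percentage of responses falling in each scale category, per item.
    percentages = {q: df[q].value_counts(normalize=True).sort_index() * 100
                   for q in items}

    # Step 2: principal components of the item correlation matrix.
    complete = df[items].dropna()                    # listwise-complete cases (assumption)
    corr = np.corrcoef(complete.to_numpy(), rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1]            # largest component first
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    # Kaiser criterion: retain components whose eigenvalue exceeds 1.0.
    keep = eigenvalues > 1.0
    loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])
    print(f"{keep.sum()} components retained, explaining "
          f"{eigenvalues[keep].sum() / len(items):.0%} of the total variance")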
The Questionnaire Items The items asking about activities, formats, communications and policies required respondents to select the most appropriate choice from five-point, Likert scales. Depending on the wording of the question, the choices, ranged from, for example, "not useful" to "useful", or "not a need" to "very much a need". Similar rating scales were used with question about actual initiatives that occurred on campus. Descriptive statistics for these items are contained in the table below. They demonstrate faculty felt the most useful development activities were those that bring experts in the various disciplines to campus (Q5 and Q10), they were free to select their own professional development activities (Q22), and that activities to learn new classroom activities were useful (Q1). The respondents expressed a perceived insufficiency in the amount of money allocated for travel (Q23). 92 Descriptive statistics Q1-Q23 N Valid Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 Q10 Q11 Q12 Q13 Q14 Q15 Q16 Q17 Q18 Q19 Q20 Q21 Q22 Q23 Missing 63 63 64 65 65 66 66 66 65 64 65 66 67 66 58 59 58 59 58 59 59 57 59 4 4 3 2 2 1 1 1 2 3 2 1 0 1 9 8 9 8 9 8 8 10 8 Mean 3.79 3.00 3.72 3.54 4.23 2.94 3.32 3.29 3.11 4.22 3.31 3.77 2.55 2.65 3.50 3.07 2.69 3.31 3.00 2.85 2.51 4.02 1.95 Std. Deviation 1.31 1.34 1.23 1.34 1.14 1.43 1.13 1.31 1.21 .92 1.36 1.19 1.25 .97 1.23 1.31 1.35 1.42 1.08 1.23 1.19 1.01 1.21 93 Descriptive statistics Q24-Q43 N Q24 Q25 Q26 Q27 Q28 Q29 Q30 Q31 Q32 Q33 Q34 Q35 Q36 Q37 Q38 Q39 Q40 Q41 Q42 Q43 Valid 53 56 51 55 60 55 59 65 65 66 66 64 66 65 50 52 48 45 41 50 Missing 14 11 16 12 7 12 8 2 2 1 1 3 1 2 17 15 19 22 26 17 Mean 2.79 2.48 3.43 2.78 3.35 3.24 2.69 2.62 2.83 3.59 2.44 3.38 3.32 3.54 3.26 3.48 3.46 3.36 3.07 3.70 Std. Deviati on 1.29 1.32 1.19 1.26 1.25 1.29 1.33 1.33 1.33 1.08 1.34 1.24 1.19 1.17 1.26 1.16 1.09 1.38 1.39 1.23 Extreme scores, or accumulations at the extremes can affect parametric descriptive statistics of Likert scales. It is often useful to assess percentages of responses received to the various contingencies of the scales. For the purposes of judging satisfaction with activities and formats, the two lowest choices were aggregated because it was felt the distinguishing qualities between the selection of a "1" and "2", signifying degrees of dissatisfaction, are often difficult to discern. For the same reason, the selections of "4" and "5" were aggregated. The middle point, i.e., "3", was left unchanged (complete frequency counts can be found in the appendix). The activities judged to be the most useful were those about the faculty members' discipline and classroom activities and instructional strategies. %responding Not Useful 17.5 36.5 17.2 21.5 9.2 Useful Activities Classroom activities Assessments Instructional strategies Course content Research in discipline 94 %responding Useful 66.7 36.5 65.6 58.5 81.5 The format judged most useful was the one that brought an outside expert onto campus (which was found to be correlated with the desire for discipline-specific development), and it was also felt, more on-campus activities were needed at convenient times. The respondents said previous activities were scheduled at times that were not useful, but the on-campus retreats were helpful, nonetheless. They also felt interdepartmental meetings were useful, as were on-campus publications and University-wide workshops. 
Useful Formats Departmental gatherings/meetings University-wide workshops Inter-departmental meetings/workshops Those held within each college Brought experts on particular topic Provide for on-campus publication or displays On campus development activities On campus opportunities Workshop times convenient %responding Not useful %responding Useful 36.4 19.7 24.2 27.7 6.3 27.7 39.4 42.4 48.5 35.4 79.7 47.7 not needed 13.6 Needed 62.1 not sufficient 49.3 sufficient 25.4 not convenient 37.9 convenient 16.7 not helpful 22.4 Retreats with a focused, single theme helpful 58.6 The results show faculty desire more on-campus activities centering on unique themes, the most helpful of which are centered around the faculty disciplines, particularly new research in those disciplines. The respondents do not appear too desirous for more activities concerning group dynamics. Approximately half of the respondents were neutral12 to the question about 12 This is implied by summing the percents finding it useful and not useful. The data can be found in the appendix, as well. 95 classroom group dynamics sessions, and most thought team building activities were not useful. %responding not useful 27.1 43.1 Group Dynamics Classroom activities Team building activities %responding useful 35.6 27.6 Most of the respondents said the primary source of on-campus encouragement came from within their own departments rather than from faculty members in other departments. This makes intuitive sense since these are the people with which most of the professional conversations would take place. %responding not a great deal 23.7 32.8 Encouragement from members of my department from faculty in other departments %responding great deal 45.8 34.5 Respondents were evenly distributed in their response to the question about sufficiency of university support for professional development, and they did not feel their input was actively solicited when campus-wide development activities were being planned. A clear majority did not feel pressured to participate in any activities, evidenced by the 70% who said they were free to select their own activities. An indicator of the perceived insufficiency of support is also found in the responses to an item about the university's travel allocation, where 73% said it was insufficient. %responding not sufficient 35.6 Support for professional development Input solicited from University Select own development activities Traveling allocation 96 %responding sufficient 33.9 not actively 45.8 %responding not free 7.0 %responding free 70.2 not sufficient 72.9 sufficient 13.6 actively 22.0 Several questions asked about university policies on development. It was felt the one concerning travel to conferences/workshops was unfair (reinforcing the feeling that there is insufficient support) and the one about release time was unfair; the one about sabbaticals was perceived as fair. Overall, the university was seen as concerned about faculty development, but could be more generous in funding individual initiatives. Three questions allude to this: one on tuition reimbursement, one on travel, and one on release time. A question asking about the policy governing distribution of funds received positive responses, overall, as did the question on sabbaticals. An interesting component of these data is that between 30-40% of the respondents were neutral to many of the questions. 
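The collapsing rule used in the tables above (responses of 1 and 2 combined, 3 left as a neutral midpoint, 4 and 5 combined) can be expressed compactly. The sketch below is a minimal illustration; the ratings in the example are invented for demonstration and are not the actual survey data.

    import pandas as pd

    def collapse_likert(responses: pd.Series) -> pd.Series:
        """Collapse 1-5 ratings into the three bands used above:
        1-2 (negative), 3 (neutral), and 4-5 (positive), as percentages."""
        bands = pd.cut(responses.dropna(), bins=[0, 2, 3, 5],
                       labels=["1-2 (negative)", "3 (neutral)", "4-5 (positive)"])
        return bands.value_counts(normalize=True).mul(100).round(1)

    # Invented ratings for demonstration only, not the survey responses.
    example = pd.Series([5, 4, 4, 3, 5, 2, 4, 3, 1, 5, 4, 2])
    print(collapse_likert(example))
    # 4-5 (positive)    58.3
    # 1-2 (negative)    25.0
    # 3 (neutral)       16.7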
The majority of respondents felt their input was not sought when planning development activities, and aspects of the communication processes, themselves, were judged to be unsatisfactory. The respondents felt there is insufficient communication of changes in policy and ineffective processes of communication, and that information about other faculty members who could be seen as a resource for development was not effectively communicated. University's policy Tuition reimbursement Travel to conferences and workshops Sabbaticals Release-time University demonstration for faculty development Distributing funds for development Communication Faculty input when planning development activities Communication of changes in policy 97 %responding not sufficient 37.7 %responding sufficient 28.3 not fair 48.2 17.6 36.4 fair 21.4 45.1 25.5 not concerned 25.0 concerned 50.0 not fair 27.3 fair 43.6 never asked 50.8 asked 30.5 not sufficient 47.7 sufficient 26.2 not effective 43.1 12.2 effective 32.3 56.0 does not communicate 54.5 does communicate 21.2 Faculty achievement recognition seldom 23.4 frequently 48.4 Communicates achievements of others does not 24.2 does 43.9 not able 18.5 able 58.5 Communication processes, generally Communicating opportunities to participate in development Communicating faculty as resource to campus community Able to pursue development goals Past Campus Activities Several questions asked about previous initiatives sponsored by the university. Respondents were again asked to utilize a five-point scale with a "1" signifying low effectiveness of the activity/satisfaction with the activity, and "5" being high satisfaction or effectiveness. The initiative found to be most effective/"satisfying" was the Separately Budgeted Research and mini-grants. The least satisfactory was the Fall workshops introducing campus people. SBR & Minigrants awarded to faculty Lunchtime presentations of Sabbatical, SBR, etc. Open, informal discussions with V.P. Carter Presentations of current research Full day April retreat Fall workshop introducing key campus people Low High 22.0% 62.0% 18.8% 56.3% 26.7% 46.7% 19.2% 45.7% 28.0% 44.0% 36.7% 39.0% Twenty-two respondents, one third, said they had not been involved in any of the university-sponsored activities, mostly because they did not have time. 98 If you have not been involved in any of the above programs/activities/events, please tell us why. Value Label Not interested Interested, no time Did not hear of activities Other Value Frequency Percent Valid Percent Cum Percent 1 2 3 4 . 3 10 1 8 45 ------67 4.5 14.9 1.5 11.9 67.2 ------100.0 13.6 45.5 4.5 36.4 Missing ------100.0 13.6 59.1 63.6 100.0 Total Valid cases 22 Missing cases 45 One question asked if the respondent felt there had been an improvement in the campus climate due to the development activities, to which 77% responded there had been at least some improvement. Do you feel that there has been an improvement in the campus climate in regard to faculty and professional staff morale and network opportunities as a result of faculty/staff development activities? Value Label No improvement Some improvement Much improvement Value Frequency Percent Valid Percent Cum Percent 1 2 3 . 14 34 13 6 ------67 20.9 50.7 19.4 9.0 ------100.0 23.0 55.7 21.3 Missing ------100.0 23.0 78.7 100.0 Total Valid cases 61 Missing cases 6 The survey also contained a question asking how many development activities the person participated in, in the past two years. 
The modal response was four (29.8% responding this way), with six being the next highest number of activities (22.8%).

Several categories into which respondents could be grouped were contained within the questionnaire. One obvious category is the dichotomous one of tenured vs. non-tenured faculty. Forty of the respondents were faculty with tenure (59.7%), 19 were non-tenured (28.4%), and eight (11.9%) did not respond to that question. A table of descriptive statistics for the questionnaire items, comparing the responses of the two groups of faculty, was constructed. It reveals subtle differences in their responses, generally a few tenths of a point; the largest difference, about nine tenths of a point, occurred on the question about the perceived utility of full-day on-campus faculty workshops. The tenured faculty were more satisfied with this format than the non-tenured faculty (Q38). This mean difference was found to be statistically significant (t=2.21; df=42; p=.033)13. Statistically significant differences were also found for the question about tuition reimbursement, where the non-tenured faculty rated it more favorably than the tenured faculty (Q24) (t=2.45; df=3.72; p=.019); the question about the fairness of sabbatical leaves (Q26), where the tenured faculty rated it as more fair (t=2.67; df=32.78; p=.012); the question about the helpfulness of on-campus retreats on a single theme (Q15), with tenured faculty finding those retreats more helpful (t=2.07; df=26.8; p=.048); the question regarding the ability to pursue one's own development goals (Q37), again where tenured faculty rated this higher than non-tenured (t=2.206; df=28.72; p=.036); and the question about the University's allocation of money for travel to conferences, where non-tenured faculty rated it as more sufficient (t=-2.37; df=36.013; p=.024).14

Factor analysis

The first 43 questions of the questionnaire were put through an exploratory factor analysis with oblique (oblimin) rotation15 (the other items captured demographic information). Rotation of the factor structure is a statistical mechanism for simplifying the emergent factors and aiding in their interpretation. The interpretation itself is based on the unique contribution each item (i.e., question) makes to the emergent factors, that is, the magnitude of the item's loading (correlation) on the factor. This procedure identified 12 factors, which accounted for 75% of the total variance of these questions16. Of these factors, six were found to be stable, with stability judged tenable if four items correlated with the emergent factor (i.e., loaded on the factor) at a level equal to or greater than |.60|. According to Stevens (1996), stability can be judged with this criterion regardless of sample size, and factors can be interpreted utilizing items that loaded on the factors at levels greater than |.36| for samples containing between 50 and 80 subjects. (The factor structure matrix is contained in the appendix.) Each factor, and a simplified name with which to interpret it, is presented in the following table, along with the respective questionnaire item numbers, statements, and loadings. (Only the stable factors are contained in the table.)

13 In reporting these t-test values, the more conservative values, obtained by not assuming equality of variances, are used even when Levene's test of equality of variances was not statistically significant. This is because the sample sizes were very different, and a large number of tests were performed.
It was felt this would attenuate the possibility of rejecting a true null hypothesis. 14 The output for the Descriptive Statistics and Independent Samples Test may be obtained by contacting the author. 15 An oblique rotation assumes the component factors are correlated. This contrasts with an orthogonal rotation (i.e., varimax), which assumes independence of the underlying factors. 16 Each questionnaire item is a factor. The main concern of an exploratory factor analysis is identifying the factors to be retained, and the interpretation of the factors. Retention is based on the methodology suggested by Kaiser (1960), that is, the factor obtaining an eigen value greater than 1.0. An eigen value is a numeric value that consolidates the variance contained within a matrix, in this case the correlation matrix 100 Factor 1 Q18 Q27 Q28 Q29 Q32 Q33 Q34 Q35 Q36 Q38 Q41 Q42 Recognition and communication of faculty achievement I receive professional encouragement from members of my department. University policy on release time is fair. The University demonstrates concern for faculty professional development. Fund distribution for development is fair. Policies are communicated effectively. The University communicates opportunities to participate in development activities. Faculty expertise is communicated to the community. Faculty achievement is recognized by the University. Faculty and staff achievements are communicated regularly by the University. Full day retreats improve development. Informal discussions with administration improves development. Participation in fall workshop introducing key people on campus improves development. -.389 .418 .509 .384 .448 .422 .604 .857 .839 .501 .629 .459 Factor 2 Q1 Q2 Q3 Q4 Q7 Q8 Q10 Q12 Q16 Q17 Teaching and assessment The most useful activities are classroom activities. The most useful activities are centered around assessment. The most useful activities are instructional strategies. The most useful activities are concerned with course content. The most useful on-campus format is workshops/seminars. The most useful format is inter-departmental meetings/workshops. The most useful on-campus format is those that bring experts on campus. More on-campus development activities are needed. Workshops emphasizing classroom activities is useful. Workshops emphasizing teambuilding activities is useful. .830 .427 .890 .607 .362 .605 .408 .385 .463 .378 Factor 3 Q6 Q10 Q12 Q13 Q18 Campus and departmental meetings The most useful on-campus format is departmental meetings. The most useful on-campus format is those that bring experts on campus. More on-campus development activities are needed. There is sufficient on campus opportunities for professional development. I receive professional encouragement from members of my department. .794 -.624 -.716 .496 .596 Factor 8 Q22 Q24 Q26 Q27 Q28 Q29 Q33 Q35 Q37 Policy on funding for long-term development activities I am free to select my development activities. The U.'s policy on tuition reimbursement is sufficient The University policy on sabbaticals is fair. University policy on release time is fair. The University demonstrates concern for faculty professional development. Fund distribution for development is fair. The University communicates opportunities to participate in development activities. Faculty achievement is recognized by the University. I have been able to pursue my professional development goals while at NJCU. 
.779 .403 .724 .561 .652 .659 .431 .366 .526 Factor 9 Q20 Q23 Q24 Q25 Q26 Q27 Q28 Q29 Q30 Policies and funding for short-term development activities There is sufficient administrative support for prof. development The amount of money the U. allocates for travel to conferences is sufficient The policy on tuition reimbursement is sufficient The policy on travel to conferences and workshops is fair The policy on sabbaticals is fair The policy on release time is fair The U. demonstrates concern for faculty development The U.'s policy on distributing funds for development is fair The faculty are asked to provide input when development activities are planned -.534 -.875 -.616 -.903 -.437 -.597 -.573 -.693 -.374 101 Q35 The U. recognizes faculty achievement Q36 The U. communicates achievements of faculty and staff regularly Q41 Open informal discussions with VP of Academic Affairs (rating) Factor 12 Planning and usefulness of previous activities Q7 The most useful format for on-campus development activities is University-wide workshops/seminars. Q8 The most useful activities for on-campus development activities are inter-departmental meetings/workshops. Q9 The most useful format for on-campus development activities is those held within each department. Q11 The most useful format for on-campus development activities is those that provide for oncampus publication or displays. Q15 On-campus retreats focused on a single them are helpful. Q16 Workshops emphasizing classroom activities are useful. Q17 Workshops emphasizing teambuilding activities are useful. Q21 Faculty input is solicited when planning development activities. Q27 The U.'s policy on release-time is fair Q28 The U. demonstrates concern for faculty development Q29 The U.'s policy on distributing funds for development is fair Q30 Faculty input is solicited when planning development activities. Q31 There is sufficient communication of changes in University policy. Q32 Policies are communicated effectively. Q34 Faculty expertise is communicated to the community. Q41 Informal discussions with administration improves development. The largest factor accounted for 24% of the variance and was mainly concerned with the University's recognition and communication of faculty achievement. This factor demonstrates a decrease in perceived intradepartmental recognition as university-wide recognition increases. The second factor extracted was also judged to be stable, and it uniquely accounted for nine percent of the questionnaire’s variance. This factor was concerned with teaching and assessment. The third factor accounted for about eight percent of the questionnaire variance and was mainly concerned with on-campus departmental meetings. There were not a large number of items loading on this factor and it needs to be developed. But, it appears that with increasing numbers of departmental meetings, the desire for on-campus developmental activities decreases. The fourth factor judged to be stable was actually the eighth factor extracted. It accounted for about four percent of the questionnaire’s internal variance. It was called the factor on policy and funding long-term development activities. Items loading on this factor had to do with freedom to select one’s own development activities and the specific activities of release-time and sabbaticals. The fifth stable factor was the ninth one extracted. 
It accounted for three percent of the variance and was primarily concerned with satisfaction with policies and funding of 102 -.400 -.362 -.455 -.471 -.409 -.416 -.387 -.534 -.683 -.755 -.771 -.543 -.412 -.439 -.811 -.479 -.371 -.504 -.420 short-term development activities. These activities include tuition reimbursement and travel to conferences. The sixth stable factor was the twelfth one extracted. It concerned the usefulness and effectiveness of on-campus development activities and communications. This factor accounted for approximately two and one half percent of the questionnaire variance. The oblique rotation allows the correlation among the factors to be assessed. Several were found to be moderately17 correlated. The highest correlation was found between factors eight and nine r(8,9)=-.41--factor eight speaks about policies of long-term development and factor nine is about policies of short-term activities. The smallest correlation was between factors three and twelve, r3,12=.001, “campus and departmental meetings” and “planning and usefulness of previous activities”. Component Correlation Matrix Component 1 2 3 4 5 6 7 8 9 10 11 12 1 1.000 .097 -.080 .220 -.216 .027 .170 .179 -.286 .038 -.024 -.323 2 .097 1.000 -.247 .036 -.119 .120 .180 -.041 -.026 .039 -.065 -.341 3 -.080 -.247 1.000 .082 .059 -.170 .020 -.027 .010 .033 -.044 .001 4 .220 .036 .082 1.000 -.073 .139 .192 .124 -.206 -.119 -.137 -.145 5 -.216 -.119 .059 -.073 1.000 -.040 -.215 -.199 .198 .014 -.006 .320 6 .027 .120 -.170 .139 -.040 1.000 -.018 .058 -.049 .039 -.004 -.115 7 .170 .180 .020 .192 -.215 -.018 1.000 .049 -.178 -.071 -.075 -.292 8 .179 -.041 -.027 .124 -.199 .058 .049 1.000 -.410 .192 -.057 -.143 9 -.286 -.026 .010 -.206 .198 -.049 -.178 -.410 1.000 -.175 -.002 .263 10 .038 .039 .033 -.119 .014 .039 -.071 .192 -.175 1.000 .068 -.077 11 -.024 -.065 -.044 -.137 -.006 -.004 -.075 -.057 -.002 .068 1.000 -.015 Extraction Method: Principal Component Analysis. Rotation Method: Oblimin with Kaiser Normalization. The responses to items loading on the stable factors were summed creating scales, and the scale scores were aggregated for all respondents. Descriptive statistics for the aggregated scales of the questionnaire are below. They are presented for all respondents and separately for tenured and non-tenured faculty. Maximum scale scores varied in conjunction with the number of items in the scale, and higher maximums denote greater numbers of items comprising the individual factor. For example, "factor 1" (communication and recognition of faculty) contained 12 items and the maximum score is 60, which is based on the 5-point scales attached to each item. The average score on this factor was moderately high, 36.55, because the mid-point is 30. It is found that the highest score is on factor two--the factor dealing with teaching and, to a lesser degree, 17 Utilizing r=.30 as cited in Cohen and Cohen 1983 (using the formula t=r/square root (1-r2/n) correlations above ±.247 were found to be statistically significant.) 103 12 -.323 -.341 .001 -.145 .320 -.115 -.292 -.143 .263 -.077 -.015 1.000 assessment (number of items=8; maximum possible score=40; obtained average=34.94). Factor eight received the next most positive rating, i.e., policies on long-term activities such as sabbaticals (seven items; maximum possible score=35; average obtained=30.41). The most neutral responses were obtained on factors three "campus and departmental meetings" and 12 "planning and usefulness of previous campus activities". 
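As a brief sketch of how the scale scores described above, and the midpoint comparisons reported below, could be computed: the item list shown is an abbreviated illustration only (the full item lists follow the loadings reported earlier), and the treatment of respondents with missing items is an assumption rather than the procedure actually used.

    import numpy as np
    import pandas as pd

    def scale_score(df: pd.DataFrame, items: list[str]) -> pd.Series:
        """Sum each respondent's ratings across the items defining one factor.
        Respondents missing any of the items receive NaN (an assumption; the
        original handling of partial responses is not specified)."""
        return df[items].sum(axis=1, min_count=len(items))

    def midpoint_z(scores: pd.Series, n_items: int) -> float:
        """z statistic comparing the observed scale mean with the scale midpoint
        (3 x n_items on 1-5 items), using the standard error of the mean."""
        scores = scores.dropna()
        sem = scores.std(ddof=1) / np.sqrt(len(scores))
        return (scores.mean() - 3 * n_items) / sem

    # Illustrative subset of teaching-and-assessment items only; the items
    # actually summed for each scale follow the loadings table above.
    teaching_items = ["Q1", "Q3", "Q4", "Q8"]
    # teaching = scale_score(df, teaching_items)
    # z = midpoint_z(teaching, len(teaching_items))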
A multiple analysis of variance (MANOVA) was performed on the scales' scores using tenure status as the grouping variable. No statistically significant differences were found between the groups' factor scores. Similarly, no significant correlations were detected between the factors and length of time teaching at the university. Frequencies: All Faculty Statistics Factor 1 31 Factor 2 54 Factor 3 56 Factor 8 44 Factor 9 34 Factor 12 38 36 13 11 23 33 29 Mean 36.55 34.94 16.70 30.41 36.00 48.45 Std. Deviation 9.976 7.398 2.366 7.267 10.89 11.90 8 60 6 40 7 30 12 35 N Valid Missing No. of items: 12 Highest possible score Factor 1: Communication and recognition Factor 2: Teaching and assessment Factor 3: Campus and departmental meetings Factor 8: Policy on long term-development activities Factor 9: Policy on short-term development activities Factor 12: Planning and usefulness of previous activities 104 16 60 80 Frequencies: Tenured Faculty Statistics Factor 1 21 Factor 2 35 Factor 3 36 Factor 8 30 Factor 9 26 Factor 12 27 19 5 4 10 14 13 Mean 36.57 35.69 16.39 30.30 34.54 48.44 Std. Deviation 10.59 7.657 2.429 7.648 11.20 12.57 8 60 6 40 7 30 12 35 N Valid Missing No. of items: 12 Highest possible score 16 60 80 Factor 1: Communication and recognition Factor 2: Teaching and assessment Factor 3: Campus and departmental meetings Factor 8: Policy on long term-development activities Factor 9: Policy on short-term development activities Factor 12: Planning and usefulness of previous activities Frequencies: Non-tenured Faculty Statistics Factor 1 8 Factor 2 17 Factor 3 17 Factor 8 12 Factor 9 7 Factor 12 9 11 2 2 7 12 10 Mean 33.50 33.88 17.06 30.42 38.14 46.33 Std. Deviation 7.290 6.800 2.164 4.582 5.146 10.37 8 60 6 40 7 30 12 35 N Valid Missing No. of items: 12 Highest possible score 16 60 80 Factor 1: Communication and recognition Factor 2: Teaching and assessment Factor 3: Campus and departmental meetings Factor 8: Policy on long term-development activities Factor 9: Policy on short-term development activities Factor 12: Planning and usefulness of previous activities Significance tests on the obtained means, when compared to the expectancy of the midpoint of the respective scales, reveals all of the obtained means are significantly higher than the expected average for each scale (remembering that z=1.96 at p=.05). 105 Means and Standard Errors Factor 1 Factor 2 Mean 36.55 34.94 S.E. 1.79 1.01 z score 3.66 14.84 Factor 3 16.7 0.32 5.38 Factor 8 30.41 1.10 11.78 Factor 9 Factor 12 36 48.45 1.87 1.93 3.21 3.08 Discussion The sample was not truly representative of the university's full-time faculty in regard to college representation, however, the major divisions of Arts and Sciences, Education, and Professional Studies were represented in the appropriate order. That is, more respondents were from Arts and Sciences, the largest college, than from Education-the next largest, and Professional Studies-the smallest. The major discrepancy was the under representation of the College of Arts and Sciences, and the over-representation from the Colleges of Professional Studies and Education. There was also a slightly smaller percentage of tenured faculty who responded than exists on campus, but tenured faculty did outnumber the non-tenured. The responses revealed faculty felt more on-campus activities were needed, especially the type bringing experts in the academic disciplines to campus and those providing opportunities for publishing or displaying original work on campus. 
This could be the product of the greater number of tenured faculty responding because it was they, more than the non-tenured faculty, who expressed interest in these activities. The hierarchy of areas of interest list discipline specific presentations, classroom activities, and instructional strategies as the three highest priorities. There was interest in course content-specific programs, and little interest in programs showing assessment strategies, or team-work initiatives. The most desired format was that which brought experts to campus. Interdepartmental meetings were found to be useful, as were the on-campus publications, but to a lesser extent. Similarly, there was not great support for university-wide workshops or departmental gatherings, although the tenured faculty felt the university-wide workshops were more useful than did the non-tenured. There were no differences found between tenured and non-tenured faculty when asked about their perceptions of institutional support for professional development. Roughly equivalent numbers of both groups of faculty felt the institution did support the pursuit of professional development, and this type of perception has been found to be a motivating factor for individuals who have pursued development activities. Organizations that were supportive of professional development tended to have employees who participated at greater rates (Noe and Wilk, 1993). The majority of respondents felt they were encouraged most to pursue their professional goals from within their department and that they were able to select their own activities. According to Clark and Corcoran (1989) much of the developmental guidance a faculty member will receive comes from 106 within the department, whether it is on a professional or personal level. They continue, however, by saying that faculty in mid-career can develop “tunnel vision” if they have not had much contact with members of other departments. By saying inter-departmental meetings are useful, the respondents may be expressing a need for interaction with other departments’ members. Other areas in which differences between tenured and non-tenured faculty were found were in regard to sufficiency of tuition reimbursement, where non-tenured faculty rated it more sufficient, and sabbatical leave, and ability to pursue one’s personal goals, which were rated higher by tenured faculty members. When asked about the effectiveness of previous activities, the Separately Budgeted Research (SBR) and Mini grants were judged most effective by all faculty. The least effective was seen to be the lunch time presentations of sabbatical and SBR results. Unfortunately, most respondents did not feel their input was sought when development activities were planned. In analyzing the underlying factors, six were judged capable of supplying information about the survey responses, and no differences were found between the tenured faculty's responses and those from non-tenured faculty. The administration's communication with faculty surfaced as the largest underlying factor. The average score on this factor was moderately high. This is interesting because items making up the factor revealed perceived insufficiencies--communication of opportunities, numbers of opportunities, and solicitation of faculty input when planning development activities. In a similar context, there was a perceived absence of campus wide communication of faculty expertise, although faculty achievements were communicated. 
This high score may be suggesting a need for greater communication, since a large part of what was being responded to in the survey was this communication factor, and the areas just mentioned are where the communication is lacking. Reinforcing the university's mission as a teaching institution, the second factor, on development of teaching strategies, obtained the highest score. The factor about policies on long-term activities also received a high rating. The policies on short-term development were rated moderately high. The lowest levels of agreement were on the third and twelfth factors, which dealt with meetings and effectiveness of previous activities, respectively. The third factor, however, needs to be further developed, since there were few questionnaire items correlating with this factor. The addition of items would provide information concerning those aspects of meetings that were useful, and what to avoid in meetings whose agenda concern professional development. The twelfth factor contained all negative correlations. With the addition of different items, to provide positive loadings, and the deletion of some current items a clearer idea of the progress in obtaining faculty input into the planning process can be had. 107 Future research should explore the qualities of the individuals that motivate them to participate in development activities. The psychological constructs know as the "Big Five" have been assessed in regard to leadership style (Judge and Bono, 2000) and service jobs (Hogan, Hogan, and Roberts 1996). It has been found in organizational settings that the desire to learn is a highly motivating factor in employees' motivation to participate in development activities, as is the perception of the support from managers and peers, and self-efficacy (Noe and Wilk, 1993). How those qualities relate to faculty in different life stages may supply information on developing initiatives for members at various career points. The present study found encouragement emanates mostly from within the department, but, again, the individuals' personality, education, or history, as it interacts with the departmental culture may be the factor spurring them to participate in particular types of activities. Considering the results of the current study, it may be asked if level of self-perceived self efficacy in the classroom, extent of current knowledge in the discipline, or personal level of psychological/cognitive development mediate the motivation towards participating in development activities in higher education. One thing lacking from the questionnaire utilized in the current study was the rate of participation and perception of the retirement planning activities. The questionnaire did not even address this as a form of development. In the future, inclusion of this type of content is recommended. Boice (1997) gives reasons why empirical research of faculty development programs has not often been utilized in program development. He says developers often see measurement as something that gets in the way of "something that is already working", or administrators perceiving that the money could be better spent elsewhere. Another reason is that measurement of efficacy in the classroom is a taboo subject to address. When it is brought up, it creates uncomfortable situations, such as learning empirically that one may be able to do something better than he/she is currently doing it. 
And when behavioral interventions are included, they are met with resistance because they entail collecting data on current practices and monitoring those practices for improvement. The current study does not offer a solution to the latter finding. It solely attempts to establish the groundwork for creating an empirical basis for assessing the magnitude of desire for particular types and formats of development programs, for quantifying the extent of perceived support from peers and administrators, and for measuring participants' satisfaction with the existing initiatives. It is hoped that the results will be used in constructing future programs and in performing research on faculty development.

One area the current study did not address is funding of development activities. A future study should assess the effects of the utilization of funds allocated for development activities. Such a study can address the nature of activities the funds were spent on, the number of participants in on-campus and off-campus activities, what the participants brought back to the campus/classroom/laboratory from the activity, usefulness for career or future planning, and overall satisfaction with the activity. This can provide insight into the usefulness of the spending so that greater efficiency in the provision of professional and personal planning can be achieved. (The Appendix and a copy of the Faculty Development Questionnaire may be obtained by contacting the author.)

References

Boice, B. (1997). What Discourages Research Practitioners in Faculty Development. In J. C. Smart (Ed.), Higher Education: Handbook of Theory and Research, Vol. XII. New York: Agathon Press.

Cohen, J. and Cohen, P. (1993). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Erikson, E.H. (1950). Childhood and Society. Cited in Berk, L.E. (1998), Development Through the Lifespan. Needham Heights, MA: Allyn and Bacon.

Gullat, E. and Weaver, S.W. (1997). Use of Faculty Development Activities to Improve the Effectiveness of Higher Education. ERIC Document ED 414796.

Hogan, R., Hogan, J., & Roberts, B.W. (1996). Personality measurement and employment decisions: Questions and answers. American Psychologist, 51, 469-477.

Judge, T.A. and Bono, J.E. (2000). Five-Factor Model of Personality and Transformational Leadership. Journal of Applied Psychology, 85(5), 751-765.

Kaiser, H.F. (1960). The Application of Electronic Computers to Factor Analysis. Educational and Psychological Measurement, 20, 141-151.

Noe, R.A. and Wilk, S.L. (1993). Investigation of the Factors That Influence Employees' Participation in Development Activities. Journal of Applied Psychology, 78(2), 291-302.

Overlook, T.H. Sr. (1994). Assessment of Employee Perceptions of Present and Future Professional Development Activities at a Northern Maine Technical College. Trends and Issues in Vocational, Technical, and Occupational Education. ERIC Document ED 373815.

Sikes, W. and Barrett, L. (1976). Case Studies on Faculty Development. ERIC Document ED 140700.

Stevens, J. (1996). Applied Multivariate Statistics for the Social Sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Super, D. (1994). A life-span, life-space perspective on convergence. In M.L. Savickas et al. (Eds.), Convergence in Career Development Theories: Implications for Science and Practice. Palo Alto, CA: CPP Books.

THE TRANSFORMATIONAL POWER OF STRATEGIC PLANNING

Marsha V. Krotseng
Vice Provost, Director of Institutional Research & Analysis
West Liberty State College

Ronald M. Zaccari
President
West Liberty State College

Introduction

Strategic planning – thoroughly understanding an institution's strengths and weaknesses and carefully charting future directions – is vital to the effective management of colleges and universities. It also is integral to institutional change. As the American Council on Education observed in a 1998 report, "unplanned change is risky. The current challenge to higher education is to chart intentionally a desired future congruent with our values and aspirations" (p. 3). Thus, strategic planning and change (or transformation) are intricately interwoven. Given the high levels of action they demand, they also represent higher education's dynamic duo. Planning without transformation is unproductive. There is no purpose in planning if nothing changes and the resulting plan lies on a shelf to gather dust. Likewise, transformation without planning disregards the institution's mission and often leads in distracting directions.

Objectives

This case study demonstrates the critical connection between strategic planning and institutional transformation. It traces the development of a strategic plan for a public baccalaureate institution and discusses how this strategic plan is linked to the transformation that has occurred on the campus over a four-year period. The study highlights numerous changes that have resulted from the plan.

Institutional Background

This analysis chronicles the dramatic institutional transformation of a small, four-year public college since 1996. Throughout its 163-year history, West Liberty State College in West Virginia's Northern Panhandle has served many first-generation students, providing an affordable education with its solid curriculum and dedicated faculty and staff. The college enjoys a rich heritage as West Virginia's oldest institution of higher education. It offers an associate degree program in dental hygiene as well as a full range of baccalaureate degree programs in the schools of liberal arts, science and mathematics, education, and business administration. The campus holds accreditation by the North Central Association and in the specialized disciplines of teacher education, nursing, dental hygiene, clinical laboratory sciences, and music.

By the mid-1990s, enrollment had dwindled from a high of 2,554 in 1981 to 2,412. While teaching and learning were taking place, there was no sense of energy or excitement, and the college had settled into a comfortable routine. The new president who arrived in July 1996 quickly recognized that this routine would not move the college toward a vibrant and successful future. In fact, he understood that the institution's failure to change and adapt could place its very existence in jeopardy.

Challenges and Opportunities

The state of the college at that time is aptly portrayed by the image of its brick and wrought iron front entrance gate. Portions of the wrought iron had separated and were awry. Struck by a vehicle during a snowstorm, the gate was left in disrepair for months. This transmitted a negative message: If the college did not care about its main entrance to campus, did it care about its internal operations? The campus certainly remained accessible; however, it lacked the focus on critical details that distinguishes a mediocre, sleepy institution from one that is animated, of high quality, and clearly focused on its future.
In particular, the new president inherited a campus with no strategic plan, no master plan, no facilities plan, and no systematic budgeting process. An existing field house had recently been demolished as a result of severe structural deficiencies, limiting physical education and wellness opportunities for students. Other infrastructure problems caused by poor construction and deferred maintenance were mounting. Although approximately half of the student body resided on campus, student activities of all types were minimal, and the college was deserted on weekends. The institution offered few, if any, special academic programs for students such as an honors program or freshman experience course. While the concept of the freshman experience course had been discussed at length, it never moved beyond this stage to action. A program of student outcomes assessment was non-existent despite the accrediting requirements of the North Central Association (and an upcoming visit scheduled for April 1998). Minimal computer technology was available for students and faculty; there was no infrastructure to support a campus-wide fiber optic network, and no computer labs had been installed. One of the college's premier academic programs -- dental hygiene -- was graduating excellent students despite the fact that the equipment in its clinic was twenty years old.

In 1996 West Liberty State College had the highest percentage of tenured faculty in West Virginia public higher education. In addition, the average age of its faculty was among the highest in the state while the percentage of faculty with doctorates was the lowest, except for the community colleges. No provisions existed to reward faculty who displayed exceptional merit, and research and service were not considered important criteria in awarding promotion and tenure. Tenure did not involve a rigorous review and, in fact, was granted almost automatically. The college also had the highest ratio of FTE faculty per student in the state. Little or no ethnic diversity was evident among either faculty or staff.

Administratively, over twenty individuals reported directly to the president. Communication links between top administrators and the deans, department chairs, and division heads were very weak. Deans and department chairs were not asked to play an active role in managing the institution and met only infrequently with the provost. No women served on the president's cabinet. Although a foundation existed, the college had only a meager endowment of approximately $1.4 million despite its extensive history, and no special levels of donor recognition had been established to acknowledge major contributors. Furthermore, the state legislature had recently adopted a major bill requiring all state institutions to exhibit greater efficiencies over the next five years in order to qualify for any increase in state funding. At the same time, the institutions were asked to provide more responsive programming for their students and increase the level of compensation for faculty and staff. In short, the college was required to do more with less. Business as usual was no longer an option.

The Campus Process: Strategies and Solutions

According to ACE's 1998 report, On Change: En Route to Transformation, "intentional change requires strategies and behaviors that are quite different from those associated with unplanned change. . . It involves charting a deliberate course" (p. 1).
Given its situation in 1996, the college needed to embark on a clear course of immediate, transformational change to remain viable as the new millennium approached. Fortunately, the college enjoyed some strong positive forces that enabled it to tackle these challenges. First and foremost was the new president, whose compelling vision inspired the campus and community; his passion to transform the institution aroused strong support from a critical core of faculty and staff who deeply believed in the college and were seeking far-reaching change. Several intensely loyal foundation and alumni board members also demonstrated a commitment to transforming the institution. The state's mandate to increase salaries through strategic planning only reinforced such vital support.

Within two months of his arrival, the new president initiated a broad-based strategic planning process involving all constituencies. Thirty-five individuals, including faculty, staff, students, administrators, and key community leaders, participated in the strategic planning retreat and ultimately produced a plan that would set the college on a visionary and productive course. The resulting document outlined an ambitious agenda for advancing the college on several critical fronts: teaching and learning, technology, campus life, community outreach, reorganizing the college, and creating a student-centered campus. The twelve goals directly addressed the institution's formidable challenges. Highly dedicated working groups intensely and systematically tackled each of these goals over the next several months. The initial strategic plan was completed and circulated to the campus for comment in early 1997. As the president acknowledged in his March 10, 1997, letter to the campus community, "To integrate the plan into the campus mainstream [now] requires every person to embrace the relevance and benefits of innovations recommended in the Vision to the Year 2000 Report. . . Our plan and its implementation must be a product of participation broad enough to cause ownership and result in specific decisions and actions to move the organization toward its future." Through broad involvement of campus constituencies and constant communication, he had initiated the process that would engender this ownership.

Not only did the president communicate the strategic vision to the campus, but he also conveyed this emerging spirit of enthusiasm and excitement to the institution's statewide governing board during a meeting on the campus. Addressing board members, he portrayed the college as "a sleeping giant on the hill" that is about to awaken and make its presence felt. Alumni, the foundation board, business leaders, local public school superintendents, and state legislators all heard the same exhilarating message.

The Beginning of Transformation

Invigorated, faculty, staff, students, and administrators targeted action steps toward meeting the plan's specified goals and objectives. At the close of the academic year, an annual update of accomplishments was compiled and shared with the campus community. During Fall 1997, the strategic plan was reviewed and updated, removing initiatives that were completed and adding new institutional priorities recommended by the faculty, staff, student, and administrator representatives participating in the planning retreat. The number of strategic goals was reduced to the seven that are currently in place:

Goal One: Create a student-friendly environment by enhancing the student's well-being.
Goal Two: Establish a more challenging academic environment.

Goal Three: Market WLSC as a high quality, affordable institution of higher education.

Goal Four: Generate, maximize and wisely utilize sufficient financial resources to fulfill the mission and vision of the College.

Goal Five: Develop and maintain a campus climate that promotes optimal employee performance, teamwork, continuous improvement and excellence.

Goal Six: Have in place the technology and communication infrastructure to support the mission and core values of WLSC.

Goal Seven: Extend WLSC into the community to meet continuously changing needs of our customers.

In October 1998, the president proudly stated to the campus, "Planning and action. . . are now a matter of daily operations. The collegial effort involved in creating the Annual Operational Plan represents a commitment to vision and planning, hours of hard work by many individuals, and dedication to action. . . Share it with your colleagues and be sure we hold one another accountable for its successful implementation." He also charged the college to "move forward with deliberate action steps to turn these objectives into achievements."

Since that time, the strategic plan has become an effective tool for keeping the campus apprised of priority activities and for building the momentum required to continue the institution's forward movement. In virtually every presentation to internal and external constituents, the president cites the strategic plan. During the Founder's Day 2000 celebration, he observed that its vision "has helped us understand the challenges that are ahead and made us cognizant of the need to respond to new opportunities." This constant reference to the strategic plan, coupled with tangible results reflecting initiatives outlined in the plan, has made this document a highly effective mechanism for communicating progress at the institution. Deans and department chairs have been drawn increasingly into the college's decision-making process and are responsible for annually reporting progress on relevant initiatives in the strategic plan. During fall 1999, the president reviewed the recently updated strategic plan with deans and department chairs and then charged each department chair to discuss the plan with the faculty members in his or her area. The specific objectives identified in the plan also convey a very powerful message to political leaders and potential donors: "This institution is serious about planning and accountability, and it deserves your strong support."

Based on the solid foundation articulated in the plan, the college has established an integrated planning process. The institutional budget plan is now directly linked to the strategic plan; through extensive budget hearings each spring, academic and administrative unit heads are called upon to justify their budget requests in relation to initiatives identified in the strategic plan. This increased level of involvement in planning and budgeting activities has heightened communication across campus and led to greater awareness of budget decisions. Over the past two years, West Liberty also has developed a ten-year campus master plan, a facilities plan, and a foundation plan that integrate with the strategic initiatives. Lending further coordination among these plans is the use of the same consultants to facilitate both the college's annual strategic planning retreat and the foundation board planning process.
As one department chair recently observed, "The strategic plan is a key document in driving the campus, including the budget and projects."

One of the major goals in the initial strategic plan concerned the need for administrative restructuring. Following deliberations with the deans and department chairs, this step was implemented in 1998. As a result, only eight positions (rather than twenty) report directly to the president, and new hires brought three women to the cabinet. Seventeen academic departments were consolidated into ten.

When the North Central Association visited the college in April 1998, the evaluators reported that "West Liberty State College has a mission statement that is well understood by students, faculty, professional staff, and support staff." Their final report conveys a powerful sense of the exhilaration the team experienced at witnessing the tremendous changes that had occurred at the college in a short period of time. Highlights of their findings include:

• The new president has brought a new sense of excitement, direction, professionalism, and impetus for change to an institution that was adrift for too many years.

• West Liberty State College now has a new Strategic Plan that establishes goals and expectations of accountability at all levels.

• The institution has faculty, staff, students, a Board of Directors, and alumni who are supportive of the spirit of change now present on campus.

• Systematic efforts to reach out to the regional community through a number of initiatives such as the Science, Math and Research Technology (SMART) Center demonstrate the willingness of the college to be of service to its community.

Evidence of Change

Continuing evidence of the systematic and highly visible effects of integrated strategic planning emerged at the College's September 2000 planning retreat. All participants were asked in advance to identify the College's top three to five accomplishments since the initiation of strategic planning in 1996. It is significant that the final list compiled from over fifty responses recognizes the strategic planning process itself as well as a clear focus on the plan, the budget review process, and the master plan. Among these "Top Ten" achievements are:

1. Campus beautification - Master Plan
2. Construction of the new Academic, Sports and Recreation Complex
3. Technology Expansion
4. Focus on Students
5. Increased Enrollment
6. Strategic Planning/Budget Review Process/Enhanced Image
7. Computer Labs/Legislative Support/Increased Accountability
8. Clear Focus on Plan/External Funding/New Department Structure
9. Honors Dorm
10. Leadership and Vision/New Dining Services/Progress in Assessment/Marketing Plan

Four years after the implementation of the strategic plan, the college is experiencing continued growth with its highest level of enrollment in nineteen years and the largest entering class since 1989. Students have acknowledged the new spirit; they are excited about the transformation they have witnessed, and some seniors who graduated last May expressed a desire to remain on campus for another year so they could enjoy further changes such as improved dining services and the $10.5 million Academic, Sports and Recreation Complex that was formally opened during a ribbon-cutting ceremony on Homecoming Weekend 2000.
Student programs and activities have greatly expanded. An honors dormitory is filled with students, and the college is currently contemplating the creation of a second such residence hall for outstanding scholars. Faculty and department chairs have acknowledged responsibility for and assumed an active role in recruiting prospective students. Over $2 million has been dedicated to increasing faculty and staff salaries to more competitive levels over the past five years, and annual merit increases reward exceptional faculty initiative. New faculty hires are expected to hold a doctorate, and several faculty are currently completing doctoral programs. A creative severance plan offered several years ago enabled the college to review program staffing and to allocate its scarce resources more effectively. The president has established awards recognizing faculty excellence in teaching, research, and service. At a ceremony in March 2000, one faculty member expressed gratitude that research is no longer considered an "aberrant" activity at the college but, rather, an expectation. In addition, special presidential honors are accorded on rare occasions to employees or friends of West Liberty who have demonstrated extraordinary performance.

Accompanying such activity is an increased emphasis on external research grant funding and a focus on development that has raised giving among alumni and friends of the college to new levels. Gifts to the college have increased by twenty-six percent or more in each of the past three years. A recent survey of donors attributed this support to "strong leadership" and the "sense of direction" provided by the strategic plan. A $1.87 million grant from the National Science Foundation funds a center that provides hands-on science education to five county school districts of the region, serving 625 K-6 teachers and over 15,000 students a year. This is one of only five such projects in the United States. In 1999 the college received additional grants totaling approximately $1 million, including a $129,000 contribution which has enhanced music education through state-of-the-art recording technology. The teacher education program is energized by a Professional Development School at one of the local elementary schools, one of only nineteen in the nation funded by Wallace-Reader's Digest. The state of West Virginia has recognized this new sense of excitement by selecting the college as the site for the Governor's School for the Arts beginning in 2001. West Liberty also received $185,000 from the Governor toward renovation of the outdated dental hygiene clinic, allowing construction of a state-of-the-art facility. The college is further energized by a recent $100,000 federal grant designated for use in planning an innovative new center for instructional technology; this center will combine the institution's strong programs in science and mathematics education with those in communications, fine arts, and other disciplines to enhance instruction for undergraduates at West Liberty as well as for students in the public schools. A proposed new business information systems degree program, combining a solid background in information technology with business preparation, will benefit from this needed addition to the campus. The center also will offer professional development opportunities for public school teachers and will play a key role in the collaborative master's degree programs that the college is pursuing with area universities as a result of a new state statute.
The campus is now wired to take advantage of technology, with fifteen computer labs available for student use. Grants from Verizon have extended Asynchronous Transfer Mode (ATM) connections to the campus as well as to the college's Warwood Center several miles away in Wheeling. Plans are underway to deliver college instruction to the region's high schools through video connections. Approximately thirty percent of the freshmen who entered in fall 2000 are enrolled in pilot sections of a new freshman experience course. A revised general education curriculum also was implemented this fall, and several new specializations (including biotechnology and sports management) have been added to the curriculum. All academic departments have completed an assessment plan and are at various stages of refining and implementing techniques for measuring their goals. The first Faculty Symposium on Assessment, held in October 2000, highlighted these goals, and speakers representing each of the four schools described some of the innovative approaches used in their departments.

In the midst of this widespread change, over 200 faculty and staff members sponsored a full-page advertisement in the local newspaper congratulating the college on its numerous accomplishments since 1996. Among the forty-six items cited were:

• Renewed Commitment to Excellence through Long-Range Strategic Planning and Comprehensive Assessment;

• New Faculty Evaluation and Merit Pay Plan;

• Expanded and Revitalized Faculty Development Program;

• Newly Opened Lines of Communication to and from Faculty Senate and Staff Council;

• Restructured academic units; and

• Commitment to a "Students First" Philosophy.

Conclusions

As the president predicted to the Board of Directors in late 1996, it appears that the "sleeping giant" has, indeed, awakened, and is beginning to make its presence known in the local community and region. West Liberty State College has quietly and effectively made a difference in many lives over the past 163 years. The institution's accomplishments and potential are just now beginning to be recognized more widely. As a result of the on-going strategic planning process, the college has begun to transform and re-invent itself to better serve a rapidly changing world. By focusing on the seven major goals that comprise our strategic plan, we created a campus culture in which our customers -- students -- receive highest priority. We restructured our finances and launched major efforts to improve an environment for teaching and research using the powerful tools of information technology. As we embark on the new millennium, the campus has become a dynamic community; frequent written and verbal communications acknowledge the strategic plan; and faculty, staff, and students are energized and actively working to accomplish the future directions we have helped envision for our institution.

The strategic plan laid the foundation for the dramatic transformation that has occurred -- and that is still occurring -- by establishing a clearly articulated vision and much-needed direction for the college. The campus embraced the vision, gradually at first, but with increasing intensity as tangible outcomes were realized. The wrought iron and brick at the entrance gate have been repaired, and there is no turning back. As the American Council on Education report observes, change "is an ongoing, organic process in which one change triggers another, often in unexpected places. . .
There is no point in time at which everyone can declare a victory and go back to 'normal life.'" This statement is clearly evidenced in the new campus culture that has emerged at West Liberty State College.

References

American Council on Education. (1998). On Change: En Route to Transformation. Washington, DC: American Council on Education.

Zaccari, R.M. "President's Letter," March 10, 1997.

Zaccari, R.M. "President's Letter," October 16, 1998.

TO SHOW HOW WE CARE: COMBINING WEB-BASED TECHNOLOGY AND INTERNATIONAL STUDENT NEEDS ASSESSMENT

Tsuey-Ping Lee
Assistant for Institutional Research, Office of Institutional Research
University at Albany, State University of New York

Chisato Tada
International Student Advisor, International Student Services
University at Albany, State University of New York

Purpose of Research

According to the Institute of International Education (Davis, 1998), 490,933 international students were enrolled in U.S. colleges and universities during the 1998-99 academic year, indicating a consistent increase over the last 40 years. It was also noted that over 11 percent of the total graduate enrollment across the country was comprised of international students. Clearly the student population in U.S. colleges and universities has become more diverse, and this trend is also observed at the University at Albany, State University of New York, a mid-sized public research institution. Over the past decade, the international population has steadily grown in response to the university's continuous commitment to fostering the international dimensions of the campus (University at Albany's Strategic Planning Committee, 1998). Total international student enrollment was 616 in 1990 and reached 857 in 2000, which is close to 6 percent of the total university student population (Office of International Education, 2000). Currently, 83 countries are represented among the international student population at Albany.

International students are "non-immigrant" students who hold temporary visas for the duration of their full-time study in the U.S., and they must adhere to a number of strict federal rules and regulations which do not apply to U.S. citizens or permanent residents. International students are also individuals whose linguistic and cultural backgrounds differ from those of U.S. students. While the experiences of international students on campus might be similar to those of their U.S. counterparts in some respects, there are special needs among the international population which must be addressed and served. To provide appropriate services to the international student population and facilitate a smooth transition from one culture to another, it is critical to monitor their needs and perform periodic needs assessment (e.g., Hammer, 1992; Lee et al., 1981; Selvadurai, 1991).

In the coming years, an increase is expected in the international population at the University at Albany. Under these circumstances, it is vital for the university to know the needs of international students and to examine whether or not our current services are satisfactory and meet international students' expectations. Additionally, as needs assessment is a continuous endeavor, selecting an effective research tool is essential. Nowadays, the Internet is broadly used for college applications and registration.
Also, about 90 percent of college and university students in North America have ready internet access (Chidley, 1996; Terkla & McKnight, 1998). The easy accessibility of the postage-free web-based survey may promote this type of research.

In this paper, two issues were addressed. First, the perceptions of international students about the services provided by the Office of International Student Services (OISS) were examined. Second, web-based survey techniques were utilized in order to understand the strengths and weaknesses of this approach for possible future use. These issues are examined both in the literature and by this collaborative research project conducted by OISS and the Office of Institutional Research (OIR) at the University at Albany in the Spring 2000 semester. In addition, we have learned that strong collaboration and communication among university units is a prerequisite for conducting web-based survey research. This research project is an excellent example of inter-unit cooperation.

Literature Review

Needs Assessment and International Students

Researchers have reported a variety of findings on international student needs assessment. Eid et al. (1989) surveyed the needs, satisfaction, and concerns of 85 international students attending Eastern Oregon State College. With a response rate of 90 percent, the 46 questions in seven different categories were analyzed by demographic variables to understand individual differences. The findings showed common needs and concerns that were also reported by international students in other colleges and universities in the U.S. International students felt their academic needs and interpersonal relationships with U.S. students were generally satisfactory; they wanted to develop more active interactions with the community; and they sought more opportunity to improve their English speaking skills and to work on campus.

In another study, Hammer (1992) conducted a "needs assessment" project for the Office of International-Intercultural Student Services at the American University. A group of 231 graduate students (14 percent of the total international population) were interviewed and surveyed. The top needs were identified as follows: cultural variety in foods, employment opportunities, dealing with financial matters, and involvement of U.S. students in international activities. There were some overlaps in the findings of Hammer (1992) and Eid et al. (1989).

A small-scale exploratory study by Luzzo et al. (1996) utilized an innovative method to determine the degree to which the needs of international students were being addressed by existing programs and services. During the last month of the fall semester, eight undergraduate students answered a brief survey with 12 open-ended questions and interacted with one another through a focus group interview. Interviews were videotaped to identify specific themes that emerged from the data. Their findings mirrored some of the findings of the previous studies: Overall, academic needs and interpersonal relationship needs were satisfied; living in residence halls was a positive experience, but adequate variety in food was lacking.

Studies by Lee et al. (1981) and Selvadurai (1991) discovered that the services typically provided to international students were both underutilized and perceived as ineffective. Lee et al. (1981) conducted a national-scale study to determine the needs of international students. The sample of 1,900 international students from thirty U.S.
colleges and universities with international student enrollments of over 300 was examined through a questionnaire organized along a number of categories (e.g., information needs, academic life needs, linguistic needs). In every category, needs were not met according to the students' expectations, and it was strongly suggested that U.S. institutions take a closer look at international students' needs and construct their programs accordingly.

Selvadurai's 1991 study revealed similar findings. The researcher evaluated the adequacy of selected academic and personal services to international students at New York City Technical College of the City University of New York. The responses from 137 students (response rate: 89 percent) to the 22-item questionnaire indicated inadequacies in the overall services. The exceptions were areas of personal services, such as obtaining financial aid and counseling on immigration and tax matters, which attained minimum satisfactory levels. The researcher observed significant differences in opinion among various groups. As to academic services, male students were more satisfied than female counterparts; Oriental groups had more positive responses than Middle Eastern/Asian groups; and Spanish/French speakers showed more favorable responses than Hindi/Arabic- and Chinese-speaking students. In the personal service area, those who were proficient in reading English assessed services as adequate, while those with excellent reading skills assessed services as inadequate. Also, students with poor English speaking skills rated services as adequate, though excellent English speakers rated services as inadequate. Selvadurai (1991) pointed out that if different groups were chosen at different times, the evaluation of the adequacy of services provided to international students might differ significantly. He suggested that the changing needs of international students at different points in time call for periodic needs assessment and subsequent adjustments in services.

Finally, Johnson (1993) examined the perceptions of international students at the University of Southern Mississippi regarding the use and the effectiveness of services provided to international students. Seventeen international students were studied through Q-methodology, a type of factor analysis. The results drew three distinctive groups: 1) dissatisfied non-users, 2) selective users, and 3) satisfied selective users. There was no relationship between the length of time at the university and the use of the services. Johnson suggested that future studies find out which demographic characteristics were predictive of the use of student services.

As this review shows, this area of research provides varied and divergent findings as well as methods. This is partly because international students are so diverse in their distinctive cultural backgrounds. Harris (1995) discussed how a "cultural perspective" approach in needs assessment increases the effectiveness of services for students who are dissimilar in culture from the dominant culture, since it can lead to a more precise identification of factors that influence students' experiences and perceptions of the college environment. Additionally, there are arguments for the goodness of fit between qualitative research strategies and diverse populations (Stabb, 1995). A mixed model of both quantitative and qualitative assessment may be desirable to get to the deeper individual perceptions as well as the broader numerical trends.
The goals of this body of research have not been conceptually defined and the theoretical formulations have not been validated (Prieto, 1995). It is evident that this area of research is in a stage of development, leaving much room for continued work.

Web-Based Survey Experiences

Thus far, there is little higher education literature that discusses web-based survey experiences. A web-based survey was conducted by the Office of International Programs at Pennsylvania State University (Lynch & Wortman, 1999) to assess the needs of international students. This research showed that a web-based survey obtained a higher response rate compared with a mail survey conducted in 1997. This research also demonstrated that international students more often asked the International Student Office for help with practical needs (tax matters, travel documents, etc.) than with family or personal matters. This study did not attempt to analyze how differently international students interact with the International Student Office according to their different cultural backgrounds, nor did it address the pros and cons of their web-based survey approach.

Several presentations at previous NEAIR conferences shared the technology of developing a web-based survey or suggestions for web-based survey research (Parrott & McKnight, 1998; Kelly, 1999; Palladino, 1999). These studies provided good examples of web-based survey design and administration as well as various technical resources in on-line survey design. However, they did not address other critical issues in the survey research process such as pilot testing, notification, confidentiality, and so forth. Therefore, more details on basic survey research issues and the complexities which need to be considered when conducting web-based surveys would serve institutional research practitioners well.

Research Method

Subjects

In the spring 2000 semester, there were 796 international students enrolled at the University at Albany, State University of New York (undergraduate: 144, graduate: 597, non-degree/exchange: 55). These students represented 82 countries. Sixty-six percent of the total international population was from Asia, followed by students from Europe (22 percent), North America (4 percent), Africa (2.6 percent), South America (2.3 percent), the Middle East and Central America/Caribbean (1.5 percent), and Oceania (less than 1 percent). They were in 56 different academic majors. Students in the Intensive English Language Program were not included in this study. It should be noted that the University at Albany has three main campuses - uptown, downtown, and east - and OISS is located on the uptown campus.

Instrument

A 22-item questionnaire was developed. A mixed model was used to support the "cultural perspective" approach. The two parts were: 1) quantitative - ratings of selected services provided by OISS on a five-option scale ranging from "very effective" through "neutral" to "not at all effective"; and 2) qualitative - written comments and suggestions about specific services and overall services. The construction of the 22 items for the questionnaire was guided by the work of Fraenkel and Wallen (1996). The reliability of the instrument was examined by using previous reliable surveys as a model. The validity was established through a series of pilot studies and review by the staff of the Office of International Education and OIR.
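The quantitative results are reported later as item means on a 1-to-5 scale (for example, 4.67 for the friendliness of OISS staff), but the paper does not spell out how the five response options were coded. As a reader's aid only, the brief sketch below shows one conventional coding and averaging scheme; the two intermediate option labels, the sample item, and the sample responses are hypothetical rather than taken from the study.

```python
# Illustrative sketch only: the 1-5 coding is an assumption, and the two
# intermediate option labels and the sample responses are hypothetical.
RATING_CODES = {
    "not at all effective": 1,
    "somewhat ineffective": 2,  # hypothetical label; the report names only three anchors
    "neutral": 3,
    "somewhat effective": 4,    # hypothetical label
    "very effective": 5,
}

def item_mean(responses):
    """Average the coded ratings for one questionnaire item, skipping unusable answers."""
    coded = [RATING_CODES[r] for r in responses if r in RATING_CODES]
    return sum(coded) / len(coded) if coded else None

# Hypothetical responses to a single quantitative item
sample = ["very effective", "very effective", "neutral", "somewhat effective"]
print(round(item_mean(sample), 2))  # 4.25 under the assumed coding
```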
In the pilot study, five international students from different countries, levels of study, and academic majors took the paper-and-pencil questionnaire administered by a graduate assistant. Oral feedback was provided on the simplicity of the language, the clarity of the questions, and the relevance of the questions. Additionally, these students were asked whether they would feel differently if completing the same questionnaire on the Internet.

The Survey Process

In this section, the complete survey process will be described, followed by the design of the on-line survey. Unlike a conventional survey, the questionnaire was first released on the web for data collection and then followed by a mail survey distributed only to those who did not respond on-line. The survey instrument was first conceptualized and the questions were designed using pen and paper. Once all of the questions were finalized, OIR started the web-page design for the survey. The web pages were designed in a vivid way to make the survey fun and intriguing for the respondents. Figure 1 shows sample web pages of the survey.

Figure 1. Sample web pages of the on-line survey

OIR employed an Active Server Page (ASP) connected to two databases built in Microsoft Access to facilitate this on-line survey. (Special thanks to Jr-Ping Daniel Yang, Programmer Analyst, Research Foundation, SUNY, for his technical support with ASP scripts.) One of the databases contained all of the international students' identification numbers as well as the date of their response. The other database was used to record survey responses. Students were required to key in their student identification number to successfully access the survey. Students who entered incorrect student identification numbers (IDs) were directed to a web page containing contact information for the survey administrator. The respondents' IDs were marked in the student-ID database once the surveys were submitted, and the date of response was automatically recorded in the database. This security system allowed us to verify that the respondents were from the target population, and it also ensured that respondents could answer the questionnaire only once. The response date allowed the data manager to trace the trend of on-line responses over time. A "thank-you" page appeared right after a successful survey submission. The role of OIR was not only to function as the data manager but also as the data "guardian." Students were assured of confidentiality in that OISS could not see individual answers and comments with any identification attached.

Another area of inter-unit cooperation was between OIR and Administrative Local Area Network (LAN) Services. Administrative LAN Services set up the read/write access for OIR to be able to connect the survey and database with the university web server. A good historical working relationship and efficient communication between OIR and Administrative LAN Services expedited the process of connecting the web survey with the university web server. While these arrangements between OIR and Administrative LAN Services were being worked out, the survey's pilot testing was done on a personal server so that minor technical problems could be fixed before it was placed on the university web server. Once the survey and database were linked with the university web server, another pilot test was conducted to assure that the connection between the survey and database on the university web server worked smoothly. As the technical details were being worked out, a traditional U.S.
mail postcard was sent out to notify international students about the upcoming survey. OIR and OISS obtained international students' e-mail addresses from Academic Computing and updated this information from students' individual files in OISS for another notification, this time by e-mail. Once the survey and database were successfully connected with the university web server and the final pilot test was completed, OISS sent a cover letter for the survey to all registered international students through e-mail. The cover letter message included the survey URL. A reminder was sent out on-line 10 days after the first e-mail. The last follow-up effort was the paper survey mailing to those who did not respond on-line. The responses submitted on-line were inserted into the Access database automatically, which allowed OIR to monitor the returns each day. The data collected via both the on-line and paper surveys were combined and analyzed after the posted survey deadline.

Analysis and Results

The survey response rate was 45.9 percent (365 out of 796). Of the respondents, 69.9 percent filled out their survey on-line. Figure 2 shows the trend of the web survey responses. The follow-up mail survey was responsible for 30.1 percent of the responses. This survey effort compared very favorably with previous efforts to study the international student population at the University at Albany; a traditional mail survey of international students in 1996 obtained a 12.7 percent response rate (83 out of 655). Combining web-based survey techniques with a traditional follow-up mail survey allowed us to maximize the response rate. The 45.9 percent response rate, the size of the respondent pool, and the fact that respondent demographics (e.g., country of origin, gender, level of study, program of study, and age) largely mirrored the international student population indicate a fairly high degree of confidence in the generalizability of the results.

Figure 2. The trend of on-line responses (number of responses by date, April 18 - May 31, with the dates of the first e-mail notification and the follow-up e-mail marked)

Figure 3 compares the country of origin of the survey respondents with that of the international student population in the Spring 2000 semester. In addition, the 45.9 percent response rate provided enough responses to produce a 95 percent confidence level with a confidence interval of ±3.78 percentage points for interpreting the survey results.

Figure 3. Demographic characteristics of the survey respondents vs. the international student population (percentage by country area: Africa, Asia, Oceania, Europe, Middle East, North America, Central America and Caribbean, South America)

Overall, respondents were positive about the services provided by OISS. The mean response scores showed that student satisfaction with pre-arrival information, the international student electronic newsgroup, social opportunities (except coffee social hours and the end-of-year party), workshops, and general services offered by OISS fell between satisfied and very satisfied. The friendliness of OISS staff had the highest mean satisfaction score (4.67 on a scale of 1 to 5).
The service rated lowest was the effectiveness of the orientation program, which was between neutral and satisfied with a mean of 3.88. The details of major findings in each service area are discussed below.

Regarding the responses to the pre-arrival package, 78 percent of the respondents received the pre-arrival package; 55.7 percent of them were satisfied or very satisfied with the package, and another 23 percent expressed a neutral opinion. A little over 100 (30 percent) of the respondents contacted OISS before their arrival, and 90 percent of them reported that their pre-arrival questions were answered by OISS. The most frequently used methods of contact were e-mail and telephone. More housing information was the improvement to the pre-arrival package most frequently suggested by graduate students and exchange students. In addition, many international students expressed that they were eager to contact continuing students from the same country prior to their arrival.

Sixty-three percent of the respondents participated in the orientation programs. Asian students had a slightly higher absence rate than European students (37 percent vs. 25 percent). Half (50.4 percent) of the orientation participants rated the program as effective or highly effective in helping students adjust to the new environment, and another quarter (24.7 percent) were neutral. Comments on improving the orientation program showed that basic information such as housing, banking, how to get a driver's license, health insurance, social security numbers, and class registration procedures should be emphasized in the program.

The electronic newsgroup listserv is used by OISS to broadcast related news and information to international students. Of the respondents, 58.6 percent subscribed to the electronic newsgroup, and 81.7 percent of them were positive about the effectiveness of the electronic newsgroup in keeping international students informed. A review of student comments suggested that messages be short and focused.

Throughout the year, OISS organized several social activities for international students. Just over half (51.2 percent) of the respondents attended at least one of the social events. Among the events, Thanksgiving dinner had the highest rating (4.46). Respondents were not as satisfied with the coffee/tea social hours (mean = 3.74) and the end-of-year party (mean = 3.76) as with Thanksgiving dinner. Many students commented that they would like more intimate contact with OISS staff during social events. According to a crosstabs analysis, the number of undergraduate students who had attended events was lower than the expected number, while the number of exchange students who had participated in events was higher than the expected number. The expected number is the number of cases expected if "student group" and "participation in events" are independent of each other. Table 1 shows the observed and expected numbers of participants broken down by student degree level.

Table 1. Observed and Expected Number of Participants (by Student Degree Level)

                    Participated: Yes         Participated: No
                    Count     Expected        Count     Expected
Undergraduate         23         31             37         28
Graduate             144        141            120        124
Exchange              15          9              3          8

Comparing the participation rates of the two largest international student groups, Asian (20 percent) and European (31 percent), it was found that Asian students were less interested in participating in the events than European students (see Table 2).
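For reference, the expected counts shown in Tables 1 and 2 follow the usual independence benchmark for a two-way table; the formula is not stated in the paper, so it is noted here only as a reader's aid:

    expected count for cell (i, j) = (row i total x column j total) / grand total

For example, applying this to the undergraduate participants in Table 1 gives (23 + 37) x (23 + 144 + 15) / 342, or roughly 31.9, close to the expected count of 31 reported there; any small discrepancy presumably reflects rounding or cases excluded from the crosstabulation.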
Table 2. Observed and Expected Number of Participants (Asian and European Students)

            Participated: Yes         Participated: No
            Count     Expected        Count     Expected
Asia          108        115            108        101
Europe         59         49             34         43

International student perceptions of OISS services can be characterized as mostly between satisfied and very satisfied. Eighty-five percent of the respondents had visited OISS more than three times. According to a means test, undergraduate students tended to visit OISS less than graduate students did (mean = 4.18 vs. 4.87). There was not much difference between Asian and European students in the frequency of visiting OISS. In addition, the longer students had stayed, the more often they visited OISS. Apart from general matters, students most frequently requested advisement on tax, immigration, and employment matters. Graduate students requested advisement on tax matters more frequently than undergraduate students did, most likely as a result of teaching and research assistantship positions. Undergraduate students sought academic advisement more than graduate students did. Asian students asked for tax-related help from OISS more often than expected, while the number of European students who sought tax-related advisement from OISS was less than expected. Of the respondents, 74.7 percent positively rated the effectiveness of OISS in informing international students about federal regulations, and 18.4 percent gave a neutral rating. Regarding the effectiveness of workshops, 88.8 percent of the 167 workshop participants were positive about them. Student satisfaction ratings of services provided by OISS staff were all between satisfied and very satisfied. Respondents were very positive about the OISS staff's friendliness, phone courtesy, usefulness of advisor information, advisor accessibility, sensitivity, effectiveness in solving problems, and the time provided for discussion. In addition, respondents had a high degree of confidence in the information provided by OISS staff.

Discussion

The specific results of this investigation indicated that the majority of international students at the University at Albany were satisfied with the services provided by OISS. At the same time, certain needs and concerns were raised by many respondents which might not otherwise have been recognized (e.g., more comprehensive housing information, establishing a way to contact students from the same country, and modifications to the orientation program). As a result of this study, OISS is developing a web page which will contain useful and important information in more depth. Since the majority of international students used e-mail as the main tool to communicate with OISS prior to their departure, we expect that the web page will be one of the critical mechanisms for OISS to disseminate information more thoroughly and effectively. Additionally, the web page will show the results of this current study as a way of introducing our services to the public and our clients.

There are several issues which are worthy of further investigation. For example, we need to learn more about what types of social events/programs appeal to students from Asia. Thus, it is our intention to conduct frequent mini-surveys or interviews on specific topics to identify particular needs according to different groupings of students. Moreover, the collaboration between the two offices contributed to the successful study and also raised more awareness of international students on campus.
We envision continuing our synergistic efforts to promote quality services to the international population through on-going assessment.

As for web-based survey techniques, looking at the whole survey process, the on-line survey did raise the response rate compared with the traditional mail survey conducted in 1996. From the experience of this web-based survey, several pros and cons were discovered. We found the on-line survey very accessible. Respondents could reach the survey with one click via e-mail as long as the e-mail recipient had an internet browser available, which we believe most people do. In addition, a more interactively designed on-line survey can be easier and more fun for respondents to fill out than a paper survey. In this international student survey, respondents obtained contact information if they had trouble accessing the main survey with their student ID number. The connection between the on-line survey and the response database allowed the data to be entered automatically right after the respondents pressed the "submit" button. This mechanism minimizes the human errors that could be caused by manual data entry. In addition, the on-line survey eliminates or minimizes the time needed for data entry and survey mailing. Accordingly, administrative costs associated with data entry and postage could be reduced. Lastly, the survey manager can monitor the survey returns and have the most up-to-date data at any time. Through Access form and report design, the data manager could even have running survey summaries available anytime upon request.

One of the disadvantages of web-based surveys is the lack of flexibility they offer those survey targets who prefer anonymity. The ID validation mechanism may cause these people to refuse to participate. We do not see this issue as a major problem at the University at Albany, since our students have historically been very willing to provide student ID numbers on surveys. In addition, web-based surveys are not viewed kindly by people who are not familiar with e-mail or internet technology. Another important consideration is that invalid e-mail addresses can become a critical issue in web-based survey administration if potential respondents are notified of the survey via e-mail.

In the present study, approximately 70 percent of the responses came from the on-line survey. We could have concluded data collection at this point with an acceptable confidence level. However, a follow-up mail survey was employed to include the opinions of those survey targets who were potentially not familiar with internet technology, missed the e-mail notification, or simply had not responded yet. As mentioned above, the follow-up paper survey did yield 30 percent of the responses. Therefore, if a maximized response rate is a critical issue for a survey, combining a web-based survey with a traditional paper follow-up survey is a strategy we recommend. The URL of the on-line survey could also be included on the follow-up paper survey so that survey targets can choose how they prefer to fill out the survey. The web-based survey should not be viewed as a response rate "panacea" for every type of survey, because not everybody appears to be familiar with and willing to use internet technology. However, a web-based survey could facilitate higher response rates in surveys targeted at college students because, as mentioned above, approximately 90 percent of the college students in North America have ready internet access.
However, when international students become the survey targets, one should consider whether or not their international students are familiar and comfortable with the internet technology. We found international students at the University at Albany to be most amenable to this approach. Post-survey communication between the survey host and the respondents is an important yet easily ignored stage in the need assessment process. It is crucial to inform the respondents that their opinions did matter. Therefore, in order to reinforce the connection and relationship between college students and the university service units, letting the respondents know how the service units are using the survey responses to improve service should be considered part of the survey process. 132 References Davis, T. M. (Ed.). (1998). Open doors 1998-99: Report on international educational exchange. New York, NY: Institute of International Education. Eid, M. T. & Jordan-Domschot, T. (1989). Needs assessment of international students at Eastern Oregon State College. (ERIC Document Reproduction Service No. ED 326 098). Fraenkel, J. R. & Wallen, N. E. (1996). How to design and evaluate research in education (3rd ed.). New York: McGraw-Hill. Hammer, M. R. (1992). Research, mission statements, and international student advising offices. International Journal of Intercultural Relations, 16, 217-236. Harris, S. M. (1995). Cultural concerns in the assessment of nonwhite students' needs. In S. D. Stabb, S. M. Harris, & J. E. Talley, (Eds.), Multicultural needs assessment for college and university student populations (pp. 17-49). Springfield, IL: Charles C Thomas. Johnson, K. A. (1993). Q-methodology: Perceptions of international student services in higher education. Atlanta, GA: American Educational Research Association. (ERIC Document Reproduction Service No. ED 363 550). Kelly, H. A. (1999). The development of a web-based survey: survey design to data analysis. 26th Annual North East Association for Institutional Research Conference. Lee, M. Y., Abd-Ella, M., & Burks, L. A. (1981). Needs of foreign students from developing nations at U.S. colleges and universities. Washington, DC: NAFSA. Luzzo, D. A., Henao, C., & Wilson, M. (1996). An innovative approach to assessing the academic and social needs of international students. Journal of College Student Development, 37(3), 351-352. Lynch, J. F. & Wortman, T. I. (1999). Do you know what you're talking about: Practical uses of research on international students needs. Presented at NAFSA: Association of International Educators 52nd Annual Conference (San Diego, CA). Office of International Education. (2000). Fall 2000 international student enrollment profile. University at Albany, NY: The Author. Palladino, M. (1999). A step by step guide to building a web-based survey. Presented at 26th Annual North East Association for Institutional research Conference. 133 Parrott, S. & McKnight, J. (1998) They'll surf but they won't swim: Student reluctance to apply to college online and implications for web-based survey research. 25th Annual North East Association for Institutional Research Conference. Prieto, S. L. (1995). International student populations and needs assessment. In S. D. Stabb, S. M. Harris, & J. E. Talley, (Eds.), Multicultural needs assessment for college and university student populations (pp. 203-223). Springfield, IL: Charles C Thomas. Selvadurai, R. H. (1991). Adequacy of selected services to international students in an urban technical college. 
The Urban Reviews, 33(4), 271-285. Stabb, S. D. (1995). Needs assessment methodology. In S. D. Stabb, S. M. Harris, & J. E. Talley, (Eds.), Multicultural needs assessment for college and university student populations (pp. 51-115). Springfield, IL: Charles C Thomas. Terkla, D. G. & McKnight, J. (1998) On-line news vs. traditional media: Student preference regarding the acquisition of current events. 25th Annual North East Association for Institutional Research Conference Proceedings. University at Albany's Strategic Planning Committee. (1998). Charting the future: Creating a new learning environment for the 21st century [Online]. Available: http://www.albany.edu/pr/planning/goals.html [1998, September 25]. 134 DEVELOPING AN ANALYSIS OF OUTCOMES FOR THE WRITING PROFICIENCY REQUIREMENT Kevin B. Murphy Institutional Research Analyst Office of Institutional Research and Policy Studies University of Massachusetts Boston Introduction Based on a user request, we have been involved with an ongoing analysis of the University of Massachusetts Boston Writing Proficiency Requirement (WPR). The University of Massachusetts Boston (UMB) is a public urban university with an extremely diverse student population that includes a high proportion of non-traditional students. The majority of our students enter as transfer students. The requirement consists of the successful completion of a timed essay examination, or the submission of a portfolio of work which includes several examples of papers written for courses and a new paper based on assigned readings and specific questions. It is designed to “assist students in acquiring critical skills. Foremost among these is the ability to present ideas clearly, correctly, and persuasively in English prose” (UMB Undergraduate Catalog). The requirement must be successfully completed as a prerequisite for graduation from the College of Arts and Sciences (CAS) and from the College of Nursing (CN). It is a high stakes requirement. There is no alternative path to graduation. Waivers are only granted to those who hold a bachelor’s degree from another institution and are entering UMB to acquire another bachelor’s degree. A number of courses have been designed to aid in the development of the critical skills needed for successful completion of the requirement. These are called Core or “C” courses. While these courses are offered in a number of disciplines throughout the CAS, they are generally overseen by the Core Curriculum Office which is also responsible for the administration of the WPR. There are also several courses specifically designed to prepare students who anticipate difficulty, or who have had difficulty meeting the requirement. It is these courses, two sequences of English composition courses, and a set of ESL courses that were the basis of the original research request. The original request that was made by the CAS Writing Proficiency Requirement Committee was basically a question that focused on the curriculum and its connection to success on the WPR. Re-formulating the Question This question assumes a view of the WPR as an event. The event has an outcome; a result of Pass or Retake. The question is about how another event, taking specific courses or the curriculum event, relates to the outcome of the WPR event. It was the wrong question. 135 The Writing Proficiency Requirement should be viewed as a process that begins before a single course is ever taken at UMB, rather than as an event. 
The better research question was about how the entire process (which includes the curriculum) contributes to success on the WPR. In order to analyze the outcomes of the WPR, we needed to first understand the entire process, and to identify stakeholders in other parts of the process that were beyond the focus of the Core Curriculum Office or the WPR Committee. This required a number of interviews and consultations that began in the Core Curriculum Office and branched out from there. There are well-established rules for the process. It was fairly easy to identify how the process works, or, at least, how it is supposed to work.

The Process in Theory

1) New students attend orientation and take the UMB English Placement Assessment (EPA), which is evaluated through the Freshman English Office. The ESL Program then further evaluates those with an ESL recommendation.

2) The recommendations are entered into the computer system.

3) University Advising (UA) accesses the EPA results, and the students are directed to the appropriate English courses.

4) The students complete the recommended courses.

5) All students complete the English Composition 101 and 102 sequence either at UMB, or bring it in as transfer credit. There is a UMB sequence of English Composition 101E and 102E that fulfills this requirement and is specifically designed for non-native English speakers.

6) Students who enter with fewer than 30 transfer credits must complete three 100-level and two 200-level "C" courses in various departments. These courses are designed to focus on the reasoning and writing skills that are assessed by the WPR. Transfer students with 30 or more credits on entry are exempt from this requirement, but may also take these courses.

7) The students attempt the WPR around the time they have accumulated 60 credits either by transfer or at UMB. They may do so by choosing either the Examination or the Portfolio option.

8) Those who pass have no further requirement.

9) Those who are required to retake the requirement enroll in NU250 if they are College of Nursing students. CAS students enroll in CRW Z282 if they intend to retake the requirement using the exam option, or CRW Z283 if they intend to retake the requirement using the portfolio option. Guidance is offered by the Core Curriculum Office, which directs the WPR.

10) Additional tutoring and other forms of support are available through the Core Curriculum Office for those students who continue to have difficulty meeting the requirement, until they have done so. Students continue to attempt the requirement until they receive a grade of Pass.

The Process in Operation

While it is necessary to identify how the process actually works, it is also necessary for the institutional researcher to identify the data that are available for each part of the process. We need to know not only what is collected, but by whom, where it is stored, and how we may access it. This is important because the data needed to analyze the question may not currently be available, and it should be a function of Institutional Research (IR) not only to recognize that, but also to identify ways to ensure that the data become available for a future analysis. Therefore, as I describe how the process actually works, I'll also discuss the data that are available at each step.

On average, only about 85% of incoming students have attended orientation over the past ten years. The fall semester tends to have a higher attendance rate than the spring semester.
University Advising keeps this information on a PC in its office. IR had no direct access to data concerning individual students. We only received the raw numbers to run against the admissions figures we maintain. The English Placement Assessment is much more important. The EPA is a holistic writing placement assessment. Students read several short passages, and write several paragraphs in answer to several questions. It is administered through the testing center at UA. It is evaluated by English Department faculty under the supervision of the Freshman English Program (FEP). The “score” is a recommendation for the student to take a particular course or sequence of courses in English. Until about three years ago, UA kept the results of the EPA on its own PC. At that time, University Information Systems (UIS) began keeping the data on a permanent student file. UIS is based on the UMass system’s main campus in Amherst MA, about ninety miles from our campus. Many of the systems they oversee are slated to be phased out in the next several years as the University system converts its management systems from its existing mainframe environment. Resources for the existing system are being diverted to create the new system. When UIS took over the EPA information, a miscommunication occurred so that three semesters’ worth of data were entered by UA staff without dates. They were under the impression that the system would insert the date. This currently makes the data impossible to locate by date, so it is difficult to identify the rates at which students took the EPA by semester. The information is eventually retrievable. Initially, it looked as though less than 5% of our students had taken the EPA over a period of several semesters. However, we were able to work backward from the group of students who had attempted the WPR, and by matching the records by student id number, get the EPA results. We found that over the past ten years, we had test results for only about 65% of the students. Several semesters had such a low rate that it seems certain that the data were somehow lost. Therefore, the analysis 137 will have to be focused on those semesters for which we have confidence in the data. We also found that a group that comprises about 10% of our fall admissions enters through a special program called Directions for Student Potential (DSP) that exempts them from the regular EPA. They are assessed through a different system, and the results are kept by their program on a separate system that is not readily available to IR. When the EPA has been evaluated by the English faculty, the student returns to University Advising, and the advisor reports the recommendation. However, when the English faculty makes the recommendations, they are partially based upon data selfreported by the students about their previous English experience. For example, a student who has already completed and received credit for the basic composition sequence but who needs additional work, would receive a recommendation for a course (ENG Z281) specifically designed for such students. When the adviser meets with the student, the student’s transcript should have been evaluated. At that point, the adviser may change the recommendation. If the example student did not actually receive prior composition credit, the adviser might change the recommendation to ENG 101 or ENG 102. The change in the recommendation is not collected anywhere. This makes it difficult to determine whether the student has complied with the recommendation. 
This is important because compliance is not mandatory. We can only determine the value of this step of the process if we know which students utilized it. Because neither completion of the EPA nor compliance with the recommendation is mandatory, a number of students self-register for courses. The Freshman English Program has developed a shadow system to deal with this. On the first day of all of the English composition classes, the instructor administers a mini-EPA. All of the students are asked to read a short passage and to write a response to several short questions. The Director of the FEP assembles a small task force from among the English Dept. faculty to assess this informal instrument by the beginning of the next class session. It is used to provide a placement recommendation for those students who avoided the formal EPA, and to confirm proper placement for those who completed it. No documentation of any kind exists for this system. No data are gathered on the results. As with other recommendations, this recommendation is not binding, and the student may insist on remaining in the course for which s/he registered. It is likely that this part of the process has a significant impact, because a number of students eventually register for specialized classes for which they have had no formal recommendation. The ESL staff from Academic Support Services assists in evaluating both the formal EPA and the informal mini-EPA. They also conduct assessments and work closely with non-native English speakers who have self-identified or been referred to their office at any time. They have the results of assessments they complete outside of the formal EPA process. However, this information is stored in a database in their office and is not readily available to IR. The “C” course requirement is fairly straightforward by rule. The students who enter with fewer than thirty (30) credits must complete it. However, they don’t necessarily 138 have to complete it before they attempt the WPR. In practice, most students do not complete the five courses before attempting the WPR for the first time. Because of the credit cutoff, and the numbers of transfer students we enroll, it is difficult to easily identify all of the students who are subject to the requirement. However, among our own first time freshmen who are all subject to the rule, less than 30% completed the five courses. In fact, over 35% had completed only two or fewer of the courses. If the student successfully completes the WPR, s/he will often attempt to get the rest of the courses waived. Because the number of these courses is limited, and they are designed to prepare the student for the WPR, the waiver is often granted. Other students attempt to receive a composition waiver by attempting the WPR before completing the English Composition sequence. This waiver is almost never granted. All course data is stored with, and controlled by, UIS. There are several courses offered that are generally understood to be for students who have not passed the WPR on a first or subsequent attempt. However, these courses are sometimes taken before the first WPR attempt by students who anticipate extreme difficulty. Data on these are available on the course file. For students who have attempted the WPR several times without passing, individual tutoring is offered. This is coordinated through the Core Curriculum Office in concert with Academic Support Services. 
Information concerning tutoring is kept on a separate system by Academic Support, and it is not available to IR. Other Issues Prior to the June 1996 WPR exam, a single WPR record was kept for each student. This record held data regarding the student’s most recent attempt. For those with multiple attempts, no information was available about previous attempts. This meant that we could not analyze students’ behavior between attempts, because we didn’t know when the previous attempt occurred. After an earlier attempt to analyze outcomes on the WPR, the system was changed to accommodate records for multiple attempts. UIS changed the input programs. For, this reason, our analysis was to include only those students who attempted the WPR for the first time in June 1996 or later. When we first began to access data on the WPR last spring, we noticed that we had more than one record listed as the first attempt for a number of students. There is a field called “noattmpt” that should but does not always identify the number of the attempt. The true identifying field is called “examtype”. However, we found that even when the two fields agreed that it was a first record, we occasionally had a previous record for the student. In order to identify the correct members of our group, we eventually settled for a set of conditions. If both of the fields agreed that it was first attempt and it was the first record we had for the student, we selected the record for the first attempters data set. Once we had our initial data set, we also noticed unexpected values in several fields on a number of records. The Core Curriculum Office is responsible for data entry for the WPR results. I contacted the clerk who normally enters the data to ask for a key to the values. There isn’t a written one. She was taught how to enter the results by the person 139 who had the job before her. She thought that the unexpected values were probably entered by a temporary worker while she was on leave. The values entered in some of the fields were valid, but for other fields. No documentation exists on this campus. Similarly, when we accessed the EPA data, the results field seemed to be filled with garbage characters. The Testing Center finally provided an old sheet of the codes they had been using when they controlled the data, with additional UIS codes penciled in. The new codes consisted of punctuation marks. No other documentation exists on our campus. Implications for IR The Stakeholders The WPR Committee and the Core Curriculum Office initiated the study. While they recognize that University Advising, the Freshman English Program, and Academic Support Services with the ESL Office all play a part in the WPR system, they tend to view the WPR as an event or pairing of events. Students take courses. Students attempt the WPR. The other stakeholders are sometimes viewed as adversaries rather than as compatriots. For example, the WPR Director suspects that a number of advisers don’t pass along the EPA recommendations to their students because they don’t believe in their value. The other stakeholders occasionally share a suspicious view of the Core Curriculum Office and the entire WPR process. Recently, we held a meeting about the progress of the study. It was called by the Director of the WPR. Nobody from any of the other stakeholders’ groups was invited. In instances like these, it may be that IR can bridge the gap between stakeholders. They may be suspicious of each other. They may in fact have conflicting goals. 
This sometimes makes it difficult for them to communicate effectively with each other. I found that each of the people I interviewed in the various departments had a very good picture of the idiosyncrasies of their particular part of the process, and was happy to talk to me about it. However, each also assumed that they knew how the other parts of the process worked, because the other parts would, of course, work according to the existing rules. At one point, I asked the Director of Freshman English how many of our students took their English composition courses at other schools. She replied that there shouldn’t be any because it would be against the rules. The rules require prior permission for our students to be able to take off campus courses and then to transfer them into UMB. The Registrar’s Office here acknowledged that that was the rule, but that it was in place in order to deal with unusual courses. Basic English composition would be readily accepted from any other accredited institution. Rules are made to be broken. It can be the job of IR to analyze quite a number of the processes on campus. We have to learn the rules and how they are applied. That can put us in an excellent position to facilitate communication and understanding if we are trusted by the various stakeholders. They have to trust that we will tell their parts of the story as accurately as we can. 140 The Data Our office does not control or maintain most of our data. As I’ve noted, some of it is kept in informal shadow systems on PCs in offices around the campus. Most of it resides with UIS in Amherst. UIS is particularly short of resources, and has many other clients and demands for its attention. Two of the major data problems that occurred happened when UIS changed or took over an existing system. Our office was aware of at least one of those changes. We should have run at least a small-scale test on the data sets shortly after the systems were changed. We might have found the attempt numbering problem in the WPR results data set before four years of data were entered, and perhaps identified the missing date values on the EPA before three semesters had to be corrected. However, we didn’t. In the future, we should. We are responsible for the integrity of the data we report. Often, we are responsible for the extra work to correct the problems with the data. The data can also give us information about the process. The very small numbers of ESL recommendations in the EPA file led me to suspect that ESL students were being evaluated differently. It was so. They receive a different code on their EPA records that was not shown on the coding sheet that I was given by the Testing Center. Their records have now been accessed. The same was true of the DSP students. I only learned that they were exempt from the regular process when I found that they had very few records in the EPA file. I was then able to ask the right questions of the responsible people. Communicating What Is Possible and What Is Meaningful We recently participated in a meeting with members of the WPR Committee and several other interested parties. I found that a number of them had unrealistic expectations. One professor wants correlations for the various courses and success on the WPR. That is quite possible. The answer is that completing the courses designed for people who enter needing extra work on their English skills is negatively correlated with success on the WPR. However, it’s also a meaningless answer. 
The proper question is how much taking such a course changes the probability of success for a student who needs to develop those skills. This is why the EPA recommendations are so important. It establishes a baseline for our analysis. Communicating this is difficult, but it is necessary. We need to be able to communicate this need for a baseline in order to persuade people to do the extra work to capture data for us. The informal mini-EPA is a good example. It is an outstanding system. The extra work done by the English faculty to assure that each student who is taking a composition course is in an appropriate class is remarkable. Probably the last thing they want do is the additional work of formally tracking those recommendations. We also need UA to perform the extra work of entering any changes in the EPA recommendations that they make. Without that extra work, we can never assess just how valuable their efforts might be. 141 Conclusion The pass rate on the initial attempt at the WPR is about 80%. The rate on subsequent attempts is about the same. Eventually, sometimes with a great deal of support, almost everyone succeeds. The challenge is to develop an assessment of the process given the process as it actually operates. Enough data probably exists in usable form to produce reasonable results now. We can probably even answer the question that was originally asked. However, we need to prepare to do a better job in the future. Part of that is helping to formulate the question so that the answer will be meaningful. 142 ADULT EDUCATION IN THE 1990S: AN ANALYSIS OF THE 1995 NATIONAL HOUSEHOLD EDUCATION SURVEY DATABASE Mitchell S. Nesler Director of Research, Academic Programs Regents College Roy Gunnarsson20 University at Albany, State University of New York Abstract The research described in this paper21 consists of a detailed analysis of the 1995 National Household Education Survey (NHES:95) Adult Education Component in light of the findings and recommendations of the Commission for a Nation of Lifelong Learners (CNLL). Findings in this paper include variations in self-reported barriers and motivations for participation by socioeconomic status, age, gender, ethnicity, industry of employment, and types of courses taken. Demographic differences were also found between those who participate in credential programs, personal development courses, and work-related courses. Adult Education in the 1990s Adult education has often been described as being on the fringe of the higher education landscape (Maehl, 2000). The vast majority of educational institutions seem to focus their attention on recruiting and retaining traditional-aged students, despite the fact that between 1985 and 1995 the number of adult students enrolling in higher education grew more rapidly than did the number of traditional-aged students (Snyder, Hoffman, & Geddes, 1998). In the 1990s, the Commission for a Nation of Lifelong Learners (CNLL) was assembled with a grant from the W. K. Kellogg Foundation. CNLL developed a series of recommendations, implementation strategies, and policy implications based on its findings. 
The Commission recommended that there be broad acknowledgement of the link between universal lifelong learning and America's position in the global economy, that access to lifelong learning resources be made equitable, that new technologies be effectively used to deliver adult education, that there be a reorganization of the delivery of adult education, and that adult education and lifelong learning be given funding in proportion to their significance for America’s future. 20 21 Roy Gunnarsson is presently employed at Regents College. For a copy of the full version of this paper, please e-mail the first author at mnesler@regents.edu 143 Motivations and Barriers in Adult Education One line of research on motivations has examined the influence of demographic characteristics on participation in adult education. Fujita-Starck (1996) further suggested that demographic characteristics alone are not sufficient to identify motivations. Instead, she suggested that adult learners be grouped by curricula (personal development, professional enhancement, and the arts). Fujita-Starck found that those enrolled in professional enhancement courses had professional motivations, while adults enrolled in personal development courses were motivated by improving communications skills. Those enrolled in arts courses were motivated by the desire for social contact. However, Scanlan & Darkenwald (1990) concluded that research into motivational factors alone has not been sufficient to distinguish between adult education participants and nonparticipants. Motivational concerns can be interrelated to the logistical problems and situations that occur in adult life. Recent research has measured both perceptions of barriers and motivations for participating in adult education. For example, in comparing participants and nonparticipants in continuing education courses, Henry and Basile (1994) found that major changes in the person’s life created barriers to enrollment. Cost was also cited as a major deterrent by nonparticipants. It should be noted, however that the term “barrier,” referring to some absolute blockage, is being replaced in the adult education literature by the term “deterrent.” The latter term reflects something more dynamic that is working in combination with other forces (Valentine and Darkenwald, 1990 as cited in Silva, Cahalan, & Lacireno-Paquet, 1998). The Current Study Several papers have been generated using the NHES:95 data (Bills, 1998a, 1998b, 1998c, 1999; Hollenbeck, 1999; Kim, Collins, & McArthur, 1997; Kim, Collins, & Stowe, 1997a, 1997b; Kim, Collins, Stowe, & Chandler, 1995; McArthur, 1998), as well as technical guides to using the data (Brick & Broene, 1997; Collins, Brick, Kim, & Gilmore, 1996; Collins & Chandler, 1996; Nolin, Collins, & Brick 1997). The current research was designed to explore previously unanswered questions using the NHES:95 data, primarily in light of the work of the Commission for a Nation of Lifelong Learners and the research on participation in adult education programs. The questions addressed include: 1) Do the self-reported barriers and motivations for participation vary by demographic characteristics such as socioeconomic status, age, gender, ethnicity, industry of employment, in addition to the types of courses taken? 2) What are the major demographic differences among those who participate in credential programs, personal development courses, and work-related courses? 144 3) Who are the major providers of adult education? 
Are there demographic differences in who is attracted to different providers of adult education? Method Sample A total of 19,722 individuals completed the adult education component of NHES:95. Of these, 11,713 were AE participants and 8,009 were non-participants. In order to provide accurate information for important subgroups of the population, oversampling was used for subgroups. Analysis In the present analyses, the WesVarPC software was used to produce weighted population estimates, standard errors, and subsequently for statistical tests. WesVarPC uses a replication method to estimate standard errors (Brick, Broene, James, & Severynse, 1997). Crosstabulation cells with frequencies lower than 30 were not included in the analyses, except as where noted. Results All percentages reported are within-group percentages for each particular category. All reported differences are statistically significant at the p<.05 or p<.01 level. The Bonferroni correction for familywise error rates was applied to all multiple comparisons. Barriers to work-related courses The greatest barrier reported to taking work-related courses was time (46.9%) followed by money and costs (29.7%). There was substantial variation in reporting each different barrier, however. A great deal of the variance in the different barriers was accounted for by age, gender, and income. Age. Younger individuals were less likely than older individuals to report time as the main barrier to work-related courses. For example, both individuals aged 16 through 24 and individuals aged 25 through 34 were less likely (40.1% and 41.3%, respectively) than individuals aged 35 through 44 (52.6%) to report this barrier. However, the two younger (16-34) age groups and the older (35+) age groups did not differ appreciably among themselves in reporting this barrier. Whereas younger people tended to be less likely than older age groups to report time as the main obstacle, they were more likely report costs as the main barrier to work-related courses. Again, both individuals between 16 and 24 years of age and individuals between 25 and 34 were more likely (37.8% and 34.9%, respectively) 145 than the 35 through 44 age group (24.9%) to report money and costs as the main barrier. Gender. Gender differences were found in three barriers to work-related courses. First, men were found to be more likely (54.2%) than women (40.7%) to report time as the main barrier. Second, women were more likely (32.3%) than men (26.6%) to report cost as the main barrier. Third, women were more likely (11.3%) than men22 (2.5%) to report child care as the main barrier. Ethnicity. There were a few ethnicity differences in reporting barriers. African Americans were least likely (33.0%) to report time as the main barrier to taking workrelated courses. Both Caucasians (49.5%) and ‘Other’ ethnicities (50.4%) were statistically more likely than African Americans to report this. Socioeconomic status. The general trend was that individuals with higher income were more likely to report time as the main barrier. Thus, individuals in the highest income category were more likely (59.6%) than individuals in the middle category (45.6%) to report this. Respondents with incomes between $20,000 and $40,000 were in turn more likely than individuals with incomes below $20,000 (29.0%) to report time as the main barrier. The pattern for reporting cost as the main barrier to workrelated courses was opposite that of reporting time as the main barrier. 
Individuals with annual household incomes below $20,000 were more likely (43.3%) than individuals with incomes between $20,000 and $40,000 (31.5%) to report money and costs as the main barrier. Individuals in the $20,000-$40,000 income group were in turn more likely than individuals with annual incomes above $40,000 (19.5%) to report cost as the main barrier. Motivations for Participation Statistics on motivations for participation are reported in Table 1. Credential courses The main reasons for taking credential courses was to train for a new job or career followed by improving, keeping up, or advancing in one’s current job. Age. Younger individuals were less likely than older individuals to take credential courses to improve in a current job but were instead more likely to take such courses to train for a new job or career. Socioeconomic status. Individuals with lower household incomes were less likely than individuals with higher incomes to take credential courses to improve in their 22 The unweighted cell frequency for men in this test was 29, one less than the suggested inclusion frequency of 30. 146 current jobs. This pattern was reversed when taking courses to train for a new job or career. Lower income individuals were more likely than higher income individuals to take credential courses to train for a new job. Further, middle income individuals were more likely than high income individuals to take credential courses to train for a new job. Personal development courses Gender. Men were more likely than women to take courses in this category to improve in one’s job. Women, on the other hand, were more likely than men to take such courses for personal, family or social reasons. Work-related courses Age. The youngest age group differed from all the older age groups (except for retirement age) in their reasons for taking work-related courses. Individuals ages 1624 were less likely than individuals ages 25-34, 35-44, 45-54, and 55-64 to take such courses to improve, keep up, or advance in their current jobs. Individuals in the youngest age group were more likely than individuals ages 25-34, 35-44, and 45-54 to take the courses to train for a new career. Socioeconomic status. The last important source of variation in reasons for taking work-related courses was income. Overall, the higher an individuals income, the more likely was that individual to take work-related courses to improve, keep up, or advance in his or her current job. Conversely, high-income individuals were less likely than middle income and low income individuals to take work-related courses to train for a new job or career. No statistical difference was found between low and middle income individuals after correction for familywise error rate. Demographic Profiles of AE Participants The demographic data are here reported within each demographic variable in order to highlight the differences between the different types of courses. See table 2 for actual percentages. Age. The majority of participants in credential courses are young. Participation in credential courses seems to decrease rapidly with age. The age distribution is more even for personal development and work-related courses. Gender. More women than men were enrolled in credential courses. The same trend is present and more pronounced in personal development courses. However, participants in work-related courses seem to be slightly more likely to be men than women. 147 Ethnicity. The largest ethnicity constituency for all three course types was Caucasian. 
Based on the population constituencies, Caucasians seem overrepresented except for credential courses. African Americans seem underrepresented in workrelated courses. Hispanic students seem underrepresented in all three types of courses, but especially in work-related courses. Educational attainment. Almost half (48.7%) of the adult population belong to the lowest two educational categories. These individuals have not had any formal education beyond high school. This large group is underrepresented in all three types of adult education. On the other hand, individuals with college degrees (Associates degree, Bachelors degree, or Postbaccalaureate degree) were overrepresented in all three types of adult education. Socioeconomic status. Participants in credential courses were proportionally distributed (compared to population constituencies) across household income categories. However, participation in personal development courses and in work related courses was less proportionally distributed. In both cases, lower income individuals tended to be underrepresented and higher income individuals tended to be overrepresented. Provider Statistics Credential courses Postsecondary institutions provided most (90.1%) of the credential courses taken by the survey respondents. Because of the high degree of uniformity, no statistical tests were performed for providers of credential courses. Personal development courses There was a great deal of variability in providers of other structured courses. The most common providers of these courses were churches or other religious organizations (28.5%), private or community organizations (8.5%), tutors or private instructors (5.3%), or some other organizations (2.0%). These providers were aggregated into the ‘Miscellaneous’ category, which subsequently accounted for 44.3% of these courses. Other main providers of these courses were postsecondary institutions and business or industry (20.3% and 18.8%, respectively). However, there were a few exceptions to this overall distribution of providers. Age. For the youngest age group (16 through 24), the most common provider of personal development courses was postsecondary institutions (42.9%). Miscellaneous providers and business or industry were listed as the provider by 30.7% and 10.8%, respectively, by these respondents. 148 Work-related courses Over half (51.9%) of the work-related courses were provided by business and industry and nearly a fourth (24.2%) by postsecondary institutions. However, a few demographic differences were found. Gender. Overall, men were more likely (58.0%) than women (46.0%) to take work-related courses from business or industry providers (t=7.35, p<.01). Ethnicity. There also appeared to be ethnicity differences in business or industry as the provider. However, the apparent ethnicity difference was found to be qualified by gender. That is, an ethnicity by gender interaction was found. For Caucasian, African American, and Hispanic men, there were no differences in reporting business or industry as the provider for work-related courses (58.6%, 57.0%, and 55.9%, respectively). Caucasian women were more likely (48.2%) than both African American women (34.5%; t=3.85, p<.01) and Hispanic women (37.1%; t=2.47, p<.05) to report business and industry as the provider. There was no statistically or practically significant difference between African American and Hispanic women. 
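For readers interested in how comparisons like those reported above can be computed, the sketch below shows a design-based test of a difference between two weighted group proportions using replicate weights, with a Bonferroni-adjusted significance criterion. It is a simplified, hypothetical stand-in for the WesVarPC procedures actually used: the jackknife-style variance formula and all column names (group, the outcome indicator, and the weight columns) are assumptions for illustration.

    # Sketch: design-based comparison of two weighted proportions using replicate
    # weights, with a Bonferroni-adjusted significance criterion. Simplified,
    # hypothetical stand-in for the WesVarPC analyses; column names are invented.
    import numpy as np
    from scipy.stats import norm

    def weighted_prop(df, mask, outcome, weight_col):
        sub = df[mask]
        return np.average(sub[outcome], weights=sub[weight_col])

    def compare_groups(df, group_a, group_b, outcome,
                       full_weight, replicate_weights, n_comparisons=1):
        """df: pandas DataFrame with one row per respondent; `outcome` is a 0/1
        indicator column, `full_weight` the full-sample weight column, and
        `replicate_weights` a list of replicate-weight column names."""
        in_a = df["group"] == group_a
        in_b = df["group"] == group_b
        diff = (weighted_prop(df, in_a, outcome, full_weight)
                - weighted_prop(df, in_b, outcome, full_weight))

        # Re-estimate the difference under each replicate weight, then pool the
        # squared deviations (a JK1 jackknife-style variance estimate).
        rep_diffs = np.array([weighted_prop(df, in_a, outcome, w)
                              - weighted_prop(df, in_b, outcome, w)
                              for w in replicate_weights])
        k = len(replicate_weights)
        se = np.sqrt((k - 1) / k * np.sum((rep_diffs - diff) ** 2))

        t = diff / se
        p = 2 * (1 - norm.cdf(abs(t)))
        alpha = 0.05 / n_comparisons          # Bonferroni correction
        return {"difference": diff, "t": t, "p": p, "significant": p < alpha}

Called once per pairwise comparison, with n_comparisons set to the number of comparisons in the family, this mirrors the familywise-error correction described in the Analysis section.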
Discussion The NHES:95 data includes questions about perceived barriers adults faced, but these questions were only posed to individuals who had an interest in a work-related course and knew of such a course they wanted to take but could not. While there is obvious logic in this approach to asking questions about barriers, it may underrepresent the actual barriers individuals face, in particular, barriers or deterrents to taking credential and other types of courses. Individuals who had less specific knowledge about courses they wished to take would also be excluded from the data collection, as only those people who knew of a course they wanted to take were asked to describe the barriers to participation. The most important barriers to adult education are time and cost. Time seems to be the greatest barrier, especially to older workers, men, and higher income individuals. For younger workers, time and cost are equally deterring to adult education, and for individuals with lower incomes, cost is the most deterring factor to adult education. Unfortunately, because barrier information was only collected for work-related courses, no comparison between types of courses can be made. The type of courses was, however, related to the reason individuals reported for taking a particular course. In terms of motivations, research has shown that age and gender covary with motivations to participate in adult education (e.g., Morstain & Smart, 1974). In addition, motivations have also been found to vary by the types of courses people take (FujitaStarck, 1996). The data analyzed here largely support these findings. Motivations were found to vary by course type (personal development, credential, or work-related courses), as well as by demographic characteristics. 149 The two overall most important reasons to take credential courses were both jobrelated. Most important was training for a new job or career, followed by improving, keeping up, or advancing in one’s current job. Some individuals were more likely than others to take credential courses to train for a new job. Not surprisingly, younger individuals and individuals with lower incomes were more likely to seek out new careers by taking credential courses, perhaps in order to increase their earning power. Older individuals and individuals with higher incomes, on the other hand, are more likely to seek to improve in their current jobs. Perhaps these individuals are satisfied with their career choices and simply seek to advance within their careers. One interesting finding was that the two reasons, training for a new career and improving in one’s current job were reversed in importance for individuals belonging to an ethnic minority. That is, for minority members, the most important reason for taking credential courses was to improve in one’s current job, followed by training for a new job or career. The most common reason for participation in personal development courses was for personal, family, or social reasons. However, a significant minority of individuals were taking these courses for work-related reasons. As would be expected, these individuals were more likely to be employed than otherwise. They were also more likely to be men rather than women. Most individuals take work-related courses to improve, keep up, or advance in their current jobs. This reason was the most common even for unemployed individuals who sought to improve in their previous fields of employment. 
Demographic characteristics and differences were also assessed for the different course types. Most noteworthy were differences in age, ethnicity, and educational level. Participants in credential courses tended to be younger than participants in other types of courses; almost half of the credential students were younger than 25 years. On the other hand, participants in personal development courses seemed more evenly distributed across age groups. Participants in work-related courses were mostly of mid-career age with very few young and old participants. Ethnicity was another source of differences in course constituencies. For all types of courses, Caucasians made up the largest ethnic group. However, when compared to population constituencies, it was found that Caucasians were slightly overrepresented in personal development courses and work-related courses, but not in credential courses. African Americans were slightly overrepresented in credential courses but underrepresented in personal development and work-related courses. Individuals of Hispanic origin were underrepresented in all three types of courses. Educational attainment differed widely between participants and non-participants. Almost half the adult U.S. population was exceedingly underrepresented in adult education. These were individuals with no more than a high school diploma. On the other hand, individuals with college degrees were greatly overrepresented in adult education. The differences in educational attainment between participants and non-participants also 150 varied with the type of courses. The overrepresentation of individuals with at least some college education would be expected for participation in credential courses. After all, higher education is a sequential process where students must attain one degree before continuing to the next. However, college-educated individuals were most overrepresented in work-related courses and least overrepresented in credential courses. The different course types also differed widely in who provided them. Credential courses were almost exclusively provided by postsecondary institutions. Personal development courses, on the other hand, were provided by a wide range of organizations. Most commonly, personal development courses were provided by churches and religious organizations. However, postsecondary institutions, business or industry, and private or community organizations also provided a significant portion of these courses. There were few trends in the types of providers of personal development courses. Most notably, for the youngest age group, 16-24 years, postsecondary institutions were the most common type of provider of personal development courses. About half of the work-related courses were provided by business or industry. Another fourth of the courses was provided by postsecondary institutions. One issue of interest regarding work-related courses is whether access to the courses is equitable. This issue is especially important when the courses are provided by business or industry since such courses are often mandated and/or sponsored by the employer. Our analysis shows that, overall, men are more likely to participate in work-related courses provided by business or industry than are women. Further, among women, there was a relatively large gap in participation rates between Caucasians and members of minority groups. Among men, however, there were no ethnicity differences in participation rates for work-related courses provided by business and industry. 
The data reported here serve to both expand our understanding of some of the issues surrounding participation in adult education and serves to confirm findings from the literature on adult education. One of the important recommendations suggested by the Commission for a Nation of Lifelong Learners was that equity of access to adult education be achieved. The data provided by NHES indicate that as a society, we still have some progress to make on achieving this goal. 151 Table 1: Reasons for participation Improve, keep up, or advance in current job CR PD WR Age 16-24 9.1 25-34 30.7 35-44 43.9 45-54 50.9 55-64 53.3* 65-99 13.9* Gender Male 28.7 Female 23.4 Race White 27 Black 25.8 Hispanic 17.9 Other 21 Labor Force Status Employed 32.7 Unemployed 6.7* Not in labor force 9.3 Household Income $0 - $20,000 13.1 $20,001 - $40,000 26 Over $40,000 35.4 Industry Agriculture 31.7* Construction 33.6 Manufacturing 35.9 Transportation & 36.6 public utilities Retail & Wholesale 11.9 Finance 34.6 Service 29.2 Government 45 Misc industries 42 Total 25.8 * Cell contains less than 30 cases Train for new job or career CR PD WR Improve basic skills CR PD WR 12.7 14.7 18.4 18.5 15.6 3.8* 70 81.3 82.9 80.5 82.9 69.6 52.5 44.7 34.4 31.5 8.9* 14.0* 5.7 3.4 2.7 1.6* 1.2* 0.3* 15.4 8.2 5.6 4.4 5.4* 2.8* 0.3* 0.1* 0.5* 0.2* 0.4* 0.3* 0.1* 0.1* 0.3* 0.1* 0.5* 19.8 11.9 81.9 79.3 41.1 47.3 3.2 2.4 6 7.4 0.4* 0.1* 0.3* 0.2* 15 15.6 12.4 14.7 81.6 77.9 72.2 76.1 43.2 49.7 47 45.8 2.5 2.2* 3.6* 5.3* 6.4 7.6 9.9 7.4* 0.2* 0.2* 0.7* 19.1 10.9* 4.8 82.2 66.7 59.5 40.1 63 52.8 2.3 10.5* 2.5* 5.7 19.2* 17.9 0.2* 10.1 15.7 16.5 70.1 79.2 83.2 56.8 43.8 35.4 5.3 2.4 1.7 12.4 8.2 5 26.7* 19.0* 18.8 25.4 77.7 78.3 87.3 83.3 42.5* 38.7 39.4 36.8 2.0* 3.3* 2.0* 0.6* 5.8* 7.9 5.2 7.7 5.9* 8.4* 20.7 28 14 14.9 82.6 75.9 79.3 86.5 88.4 80.6 53.7 39.6 40.1 34.1 33.7 44.4 3.5* 3.7* 3 2.4* 2.0* 2.7 9.8 7.1 6.5 2.7* 3.4* 6.7 Meet requirements for diploma or degree CR PD WR Personal, family, or social reason CR PD WR 22.9 11.2 10.8 5.1* 7.3* 6.4* 12.3 4.1 3.9 3.2* 2.2* 0.4* 9.9 6.7 7.4 10.1 6.9 19.3* 14.8 12.9 10.5 12.4 29.2* 65.7* 69.1 76.7 73.7 76 80 95.2 4.5* 3.4 3.7 4 4.4* 7.1* 0.2* 0.3* 15.2 15.9 5.5 3.7 8.4 8.1 14.2 13 70.4 81.3 3.1 4.5 0.1* 0.8* 0.7* 0.2* 0.6* 0.6* 15.1 15.3 17.5 19.7 4 5.4 4.7* 9.0* 8 8.2 11.8 9.5* 14.2 8.6 16.7 13.5 77.8 75.2 78.1 69.7 3.5 5.4 5.4* 6.4* 0.5* 0.1* 1.5* 0.3* 0.2* 2.1* 0.2* 14 15 21.2 4.4 7.5* 3.9 8.2 6.0* 10.6 12.8 15.3 15.7 73.4 67.8 88.1 3.4 6.0* 11.1 0.2* 0.1* 0.3* 0.6* 0.1* 0.1* 0.4* 0.2* 0.2* 17.2 15.8 14.2 5.2 4.8 3.8 10 8.7 7.7 12.3 14.1 14.2 78.1 76.5 77.3 6.6 3.6 3.4 1.5* 0.6* 1.2* 0.3* 17.5* 17.9* 11.9 15.0* 11.7* 7.2* 3.5* 4.3* 11.4* 10.3* 3.3* 6.7* 8.2* 9.8* 12.2 10.5* 59 68.6 74.8 69.3 3.7* 3.6* 4.1* 2.1* 0.1* 0.3* 20.3 15.2 15.7 10.5* 10.1* 15.6 7.2 3.1* 4.6 4.2* 3.5* 4.4 4.3* 13.8 9.6 6.3 4.5* 8.2 14 9.8* 14.2 10.2* 13.8* 13.6 83 84.2 70.7 63.4 80.5 77.2 3.3* 2.2* 3.9 3.9* 2.6* 3.8 0.1* 0.3* 0.1* 0.1* 0.2* 0.3* 0.4* .2* .2* 0.6* .2* 152 Table 2: Demographics of adult education participants Credential f* Age 16-24 25-34 35-44 45-54 55-64 65-99 Gender Male Female Race White Black Hispanic Other Highest Grade Completed Up to 11th grade High school Vocational/technical school Some college Associates degree Bachelors degree Postbaccalaureate degree Labor Force Status Employed Unemployed Not in labor force Industry Agriculture Construction Manufacturing Transportation Retail & Wholesale Finance Service Government Misc. 
industries N/A Household Income $0 - $20,000 $20,001 - $40,000 Over $40,000 Total * % Personal Development f* % Work-Related f* % Population Total f* % 9,682 5,648 3,428 1,475 159 20 47.4 27.7 16.8 7.2 0.8 0.1 4,821 8,948 9,650 6,509 3,550 4,161 12.8 23.8 25.6 17.3 9.4 11.1 3,287 10,413 12,728 9,462 3,109 697 8.3 26.2 32.1 23.8 7.8 1.8 22,439 40,326 42,304 31,807 21,824 30,876 11.8 21.3 22.3 16.8 11.5 16.3 9,192 11,220 45.0 55.0 14,276 23,363 37.9 62.1 19,653 20,042 49.5 50.5 90,275 99,301 47.6 52.4 15,138 2,573 1,377 1,325 74.2 12.6 6.7 6.5 30,079 3,927 2,159 1,474 79.9 10.4 5.7 3.9 32,999 3,371 1,851 1,474 83.1 8.5 4.7 3.7 144,602 20,808 15,705 8,461 76.3 11.0 8.3 4.5 265 2,280 244 9,779 1,727 3,271 2,845 1.3 11.2 1.2 47.9 8.5 16.0 13.9 3,086 8,798 1,337 8,715 2,730 7,254 5,718 8.2 23.4 3.6 23.2 7.3 19.3 15.2 1,860 7,918 1,383 7,686 3,202 9,698 7,949 4.7 19.9 3.5 19.4 8.1 24.4 20.0 36,385 55,919 6,327 34,435 9,975 26,858 19,677 19.2 29.5 3.3 18.2 5.3 14.2 10.4 14,358 1,415 4,639 70.3 6.9 22.7 25,936 1,419 10,284 68.9 3.8 27.3 36,622 906 2,167 92.3 2.3 5.5 117,833 8,167 63,576 62.2 4.3 33.5 187 441 1,493 813 4,178 1,016 7,872 1,068 608 2,735 0.9 2.2 7.3 4.0 20.5 5.0 38.6 5.2 3.0 13.4 609 1,157 3,085 1,983 3,948 1,795 12,977 2,092 1,328 8,665 1.6 3.1 8.2 5.3 10.5 4.8 34.5 5.6 3.5 23.0 660 1,304 4,446 2,661 3,073 3,559 17,290 3,557 1,998 1,147 1.7 3.3 11.2 6.7 7.7 9.0 43.6 9.0 5.0 2.9 3,792 7,320 19,808 8,441 22,568 7,506 48,027 7,843 6,593 57,677 2.0 3.9 10.4 4.5 11.9 4.0 25.3 4.1 3.5 30.4 6,233 5,905 8,274 20,412 30.5 28.9 40.5 100.0 7,885 11,800 17,954 37,639 20.9 31.4 47.7 100.0 4,586 10,947 24,162 39,695 11.6 27.6 60.9 100.0 56,853 58,839 73,883 189,576 30.0 31.0 39.0 100.0 Frequencies are weighted population estimates and are reported in thousands. 153 References Bills, D. B. (1998a). Adult educational re-entry and the socioeconomic life course. Unpublished manuscript, University of Iowa. Bills, D. B. (1998b, May). Trends in participation in adult education between 1991 and 1995: Access and barriers. Paper presented at the 38th Annual Association for Institutional Research Forum, Minneapolis, MN. Bills, D. B. (1998c, May). The participation of adults in personal development courses: New evidence from the 1995 National Household Survey. Paper presented at the 38th Annual Association for Institutional Research Forum, Minneapolis, MN. Bills, D. B. (1999, August). Employer support of job-related education and training: Paying the cost to be the boss. Paper presented at the Annual Meeting of the American Sociological Association, Chicago. Brick, J. M., & Broene, P. (1997). Unit and item response, weighting, and imputation procedures in the 1995 National Household Education Survey (NHES:95) (NCES Publication No. WP 97-06). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Brick, J. M., Broene, P., James, P., & Severynse, J. (1997). A User’s Guide to WesVarPC. Rockville, MD: Westat Inc. Brick, J. M., Wernimont, J., & Montes, M. (1996). The 1995 National Household Education Survey: Re-interview results for the adult education component. (NCES Publication No. WP 96-14). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Collins, M. A., Brick, J. M., Kim, K., & Gilmore, S. (1996). User’s Manual: NHES 95: Adult education data file user’s manual (NCES Publication No. 96-826). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Collins, M. A., & Chandler, K. (1996). 
A guide to using data from the National Household Education Survey (NHES)(NCES Publication No. 96-891). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Fujita-Starck, P. J. (1996). Motivations and characteristics of adult students: Factor stability and construct validity of the Educational Participation Scale. Adult Education Quarterly, 47, 29-38. Henry, G. T., & Basile, K. C. (1994). Understanding the decision to participate in formal adult education. Adult Education Quarterly, 44, 64-82. 154 Hollenbeck, K. (1999, June). Providers of adult education. Paper presented at the 39th Annual Association for Institutional Research Forum, Seattle, WA. Kim, K., Collins, M., & McArthur, E. (1997). Participation of adults in English as a second language classes: 1994-95 (NCES Publication No. 97-319). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Kim, K., Collins, M., & Stowe, P. (1997a). National Household Education Survey of 1995: Adult education course coding manual (NCES Publication No. WP97-19). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Kim, K., Collins, M., & Stowe, P. (1997b). Participation in basic skills education: 1994-95 (NCES Publication No. 97-325). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Kim, K., Collins, M., Stowe, P., & Chandler, K. (1995). Forty percent of adults participate in adult education activities: 1994-95 (NCES Publication No. 95-823). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Maehl, W. H. (2000). Lifelong learning at its best: Innovative practices in adult credit programs. San Francisco: Jossey-Bass. McArthur, E. (1998). Adult participation in English-as-a-second-language (ESL) classes (NCES Publication No. 98-036). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Morstain, B. R., & Smart, J. C. (1974). Reasons for participation in adult education courses: A multivariate analysis of group differences. Adult Education, 2, 83-98. Nolin, M. J., Collins, M., & Brick, J. M. (1997). An overview of the National Household Education Survey: 1991, 1993, 1995, 1996 (NCES Publication No. 97-448). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Scanlan, C., & Darkenwald, G. G. (1990). Identifying deterrents to participation in continuing education. Adult Education Quarterly, 34, 155-166. Silva, T., Cahalan, M., Lacireno-Paquet, N. (1998). Adult education participation decisions and barriers: Review of conceptual frameworks and empirical studies. (NCES Publication No. WP 98-10). Washington, DC: U.S. Department of Education, National Center for Education Statistics. 155 Snyder, T. D., Hoffman, C. M., & Geddes, C. M. (1998). Digest of education statistics 1997 (NCES Publication No. 98-015). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Valentine, T., & Darkenwald, G. G. (1990). Deterrents to participation in adult education: Profiles of potential learners. Adult Education Quarterly, 41, 29-42. 156 CURRICULUM REVIEW AT A VIRTUAL UNIVERSITY: AN EXTERNAL FACULTY PANEL APPROACH Mitchell S. Nesler, Director of Research, Academic Programs Regents College Amanda M. 
Originally founded in 1971 by the New York State Board of Regents as the External Degree Program of The University of the State of New York, Regents College is currently a private, independently chartered institution based in Albany, New York. It is governed by a board of trustees comprised of a national group of prominent leaders in education, business, and the professions. On January 1, 2001, Regents College will change its name to Excelsior College, although the college's mission will remain the same. The mission of the college is to help remove barriers that exist for working adults in their quest for higher education while still maintaining rigorous standards of academic excellence in its external degree programs. Since its inception, more than 90,000 individuals have earned accredited associate and baccalaureate degrees in business, liberal arts, nursing, and technology from this unique college. Approximately 15 percent of the students enrolled in Regents College come from New York State; the remaining 85 percent come from all other states and several foreign countries. All of the college's enrolled students (approximately 17,000) study at a distance.

To ensure academic excellence, the college utilizes multiple methods and measures to assess program effectiveness. Graduate follow-up surveys, employer and/or supervisor surveys of graduates' work, and external faculty review of curriculum and program outcomes are just some of the measures of program effectiveness instituted by the College.

Regents College does not have a resident faculty, just as it does not have resident students. Each degree program (business, liberal arts, nursing, and technology) has a faculty committee that is responsible for overseeing its respective degree programs. The approximately 350 faculty of Regents College are drawn from many colleges and universities as well as from industry and health care facilities. They establish and monitor academic policies and standards, determine degree requirements and the ways in which credit can be earned, develop the content for all examinations, review the records of students to verify their degree requirement completion, and recommend degree conferral to the Board of Trustees.

Review of the curricular structure is a challenging task for any college or university, but it poses additional challenges for virtual universities. Regents College offers external degree programs in 18 concentrations within Liberal Arts. The faculty and administration were interested in evaluating the curriculum structure for each of these concentrations in terms of both strengths and weaknesses. The overarching goals of the reviews were program improvement, documentation of the curriculum's equivalence to that of traditional four-year institutions, and evaluation of the currency of the curriculum structure as compared to traditional four-year institutions. By their structure, external degree programs offer the student flexibility to obtain credit toward a degree from a variety of sources, including courses taken at accredited traditional institutions. Regents College also offers direct assessment of student learning through a suite of proficiency exams developed by the college's Assessment Unit.

The selection of comparison institutions becomes a challenging task for the virtual university.
For traditional institutions, the selection of comparison institutions may focus on institutions of identical affiliation, student body size, entrance requirements, and geographic location. Virtual universities, however, serve students without such geographic boundaries. Regents College, in particular, serves groups traditionally underrepresented in higher education, does not have entrance requirements, and serves students from around the globe. One of the first challenges in conducting a curriculum review was therefore selecting comparison institutions.

A sensible approach is to conduct the curriculum review with the overarching goal of the review in mind during the design phase. The complexity of reviewing 18 concentrations within Liberal Arts was considered along with the nature and mission of the college. To lend some consistency to the review process, it was decided to select comparison institutions that would remain constant across each of the 18 reviews. Using a fixed set of institutions reduces the potential for a given curriculum to fare well simply because of particular characteristics of the institutions selected for a particular review. In addition, this practice allows for some comparison of review outcomes across programs. As the goal was to investigate the equivalence and currency of the Regents College curriculum as compared to those of traditional institutions, only four-year institutions having majors identical in name to each of the Regents College concentrations were selected as comparison institutions. From that set of traditional four-year institutions, the final set of ten institutions was selected to vary in institution size, affiliation, and geographic location. The resulting set included institutions whose self-reported entrance difficulty was "moderately difficult," and whose geographic locations varied, with the intention of selecting institutions representative of programs nationally and ensuring a rigorous review process.

The Regents College Biology concentration was the first curriculum to undergo review. The remainder of this paper discusses the procedure of the review. Outcomes of the curriculum review for Biology are discussed, as well as the strengths and weaknesses of the procedure utilized.

Method

Participants

Again, the overall goal of the review was to ensure equivalence and currency of the Regents College curricula relative to those of traditional four-year institutions nationally; thus, the selection of external evaluators was approached with some of the same criteria used to select comparison institutions. Criteria used for selection of external faculty included extensive teaching experience in Biology, current affiliation with a four-year institution, and an openness to the notion of distance education and the mission of Regents College. Two Regents College faculty members with expertise in Biology nominated faculty external to the College for participation. The nomination procedure resulted in the selection of three faculty reviewers with no prior affiliation with Regents College. These faculty were from public and private institutions in Ohio, Florida, and Pennsylvania.

Materials

The curriculum for the biology major in each of the ten comparison institutions was outlined adjacent to the Regents College biology curriculum (see Table 1 for the Regents College curriculum structure), resulting in 10 rating sheets.
Reviewers were also asked to make a global rating of the equivalence of the Regents College biology curriculum as compared to the 10 comparison institutions and their own home institution. Acknowledging the importance of viewing the curriculum review in context, additional materials about Regents College were provided to the external panel. Among these materials were a Liberal Arts catalog, a copy of the Annual Report to the Faculty from the Academic Vice President, a listing of distance learning courses available to students obtained from the College's Distance Learn database (http://www.lifelonglearning.com), and sample status reports (i.e., transcripts) of recent biology graduates. Since course titles vary greatly across institutions, course descriptions for each of the ten comparison institutions were also provided, along with a brief description of each institution.

Reviewers were also asked to give an overall rating of the currency of the Biology concentration curriculum structure. For purposes of the review, curricular currency was defined as the degree to which the curriculum under evaluation "compares to the current research and thinking" in a particular discipline. Therefore, to be current, courses in the curriculum must represent those topics considered seminal and reflective of the changes in the field over time, such that new approaches to a topic are reflected in the course opportunities for students. In addition, through its Outcomes Assessment Framework (Peinovich & Nesler, 2000), Regents College has developed a set of learning outcomes, called objectives, for each of its external degree programs. Reviewers were therefore also asked to evaluate the learning objectives for the Biology concentration (see Table 2) in terms of their equivalence and currency as compared to their knowledge of the field and their home institutions.

Procedure

External faculty were nominated and contacted about their willingness to participate in the review. Upon the decision to participate, each panel member received the ratings packet and supplemental materials. The actual reviews were conducted individually, and the panel "met" via two teleconferences. Materials were sent to faculty in advance of the first teleconference to allow time for review of the materials prior to the discussion. The first teleconference served as an orientation to the college and to the curriculum review process. The college's history, mission, and the characteristics of the student body were discussed.

In the time between the first and second teleconferences, faculty completed their ratings packet, comparing the Regents College biology curriculum to the curriculum of the selected peer institutions and making judgments as to its equivalence and currency. Overall ratings of the curriculum's equivalence and currency were also obtained. In addition, program objectives were rated for their overall equivalence and currency. All ratings were made on a seven-point scale ranging from 1 (not at all equivalent/current) to 7 (very equivalent/current). One week after the introductory teleconference, during the second teleconference, the ratings of each individual peer institution, along with the overall ratings, were reported and discussed. Strengths, weaknesses, and recommendations for changes to the curriculum followed the ratings discussion. A report was drafted based on the recommendations of the external panel for presentation to the faculty for review and consideration.
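As a rough illustration of how completed rating sheets of this kind can be summarized before the panel discussion, the sketch below (not part of the original review; the reviewer labels, institution labels, and scores are hypothetical) simply averages the seven-point equivalence and currency ratings by comparison institution and overall.

```python
# Minimal sketch with hypothetical data: summarizing reviewers' 7-point
# equivalence and currency ratings across comparison institutions.
from statistics import mean

# ratings[reviewer][institution] = (equivalence, currency), each on a 1-7 scale
ratings = {
    "Reviewer A": {"Institution 1": (5, 6), "Institution 2": (4, 5)},
    "Reviewer B": {"Institution 1": (6, 6), "Institution 2": (5, 5)},
    "Reviewer C": {"Institution 1": (5, 5), "Institution 2": (4, 6)},
}

institutions = sorted({inst for sheet in ratings.values() for inst in sheet})

print(f"{'Institution':<15}{'Mean equivalence':>18}{'Mean currency':>15}")
for inst in institutions:
    equiv = [ratings[r][inst][0] for r in ratings if inst in ratings[r]]
    curr = [ratings[r][inst][1] for r in ratings if inst in ratings[r]]
    print(f"{inst:<15}{mean(equiv):>18.1f}{mean(curr):>15.1f}")

# Overall means across all reviewers and institutions
all_equiv = [v[0] for sheet in ratings.values() for v in sheet.values()]
all_curr = [v[1] for sheet in ratings.values() for v in sheet.values()]
print(f"\nOverall: equivalence {mean(all_equiv):.1f}, currency {mean(all_curr):.1f}")
```

Such a tabulation is only a starting point; as the Results section notes, the panel ultimately treated the ratings qualitatively.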
Results

During the second teleconference, the external review panel reported difficulty in the task of assessing equivalence of the curriculum. Discussion of the ratings indicated that a lack of equivalence between the Regents College biology curriculum and the biology curriculum of a peer institution could be a function of rigor in either curriculum. Thus, the reviewers recommended that the scale anchors be changed to "Not at all Rigorous" and "Very Rigorous" for future reviews. As a result, the following discussion of the reviewers' comments regarding the curriculum is qualitative in nature.

In most cases, the reviewers reported that the Regents College Biology curriculum was equivalent to the biology curriculum of their home institution. The external panel cited the required history of science or bioethics course as a major strength of the Regents College Biology curriculum. Other strengths included the requirement for a course in developmental biology and the breadth of choice in the curriculum. The molecular biology requirement was also noted as being current. Two weaknesses were reported by the panel: (1) the absence of a course emphasizing Biodiversity, and (2) the possibility of substituting a course in Evolution for a course in Genetics.

In terms of the program objectives, the external panel indicated that few programs outline such objectives, but that the objectives appeared to be reflective of the curriculum structure. This was found to be a strength of the Regents program. The panel thus indicated that the objectives were equivalent and current. One recommended change to the objectives was to change the word "systematic" to "systems" biology in Objective #4 (see Table 2).

Based on the discussion, the panel recommended the following curricular changes: (1) to make Genetics a required course in intermediate and upper-level courses (Level II), removing Evolution as an alternate choice; (2) to move Evolution to the electives level (Level III) of the curriculum; (3) to insert a course in Biodiversity into the core level (Level I); (4) to change "Systematic Biology" in Level IIC to "Systems Biology"; and (5) to revise objectives as needed based on the above recommendations.

Discussion

The curriculum review process provided constructive feedback about the biology concentration curriculum as structured. The external reviewers generally indicated that the Regents College Biology concentration was quite comparable to that of the traditional four-year comparison institutions and to their home institutions. Recommendations included that genetics be a required course without the opportunity to use evolution as a substitute course. Next, the panel recommended the insertion of a Biodiversity course into the core requirements. Finally, a revision of the language of "systematic" biology to "systems" biology in the curricular structure and program objectives in biology was proposed. The Liberal Arts Faculty voted to approve each of the recommendations at their Fall 1999 meeting.

Overall, the procedure utilized ran smoothly. The first teleconference was initially anticipated to last approximately 30-60 minutes. However, the teleconference lasted approximately two hours. While it was longer than anticipated, the length allowed for the development of rapport among reviewers, thus facilitating conversation in the second teleconference, which was also about two hours long.
External reviewers engaged in lively discussion of the curriculum while acknowledging the mission of the College. With respect to modifications in the review process itself, the rating scale anchors were changed for subsequent reviews. Faculty panel members indicated that the task of rating equivalence was difficult because the curricula of the two institutions could be nonequivalent for different reasons (i.e., strengths or weaknesses in either curriculum). Because the goal of the review was to ensure that students completing an external degree program at Regents College were obtaining an equivalently rigorous academic experience, the scale anchors were revised to read "Not at all Rigorous" and "Very Rigorous." To facilitate ratings in subsequent reviews, an adapted definition of rigor (Spahn, 1998) was adopted by the Liberal Arts faculty, such that rigor has been defined as "a strong base of knowledge and understanding through a thorough and challenging learning experience" (Liberal Arts Faculty, 1999). The change in anchors will hopefully decrease the ambiguity in the rating task.

In summary, outcomes of the review were viewed as positive from the perspective of the faculty and external panel, and the process itself was economically feasible. The curriculum structure review process is recommended as one step in the overall review of a program, balancing economic feasibility with the qualitatively rich information provided for program improvement.

Table 1
THE REGENTS COLLEGE BIOLOGY CONCENTRATION CURRICULUM STRUCTURE

I. Core Required Courses
   A. Introductory Biology
   B. Cell/Molecular Biology

II. Required Areas (Choose at least One Course from each of the following areas)
   A. Genetics & Evolution
   B. History of Science/Bioethics
   C. Systematic Biology (Animal/Plant), including Anatomy & Physiology; Intermediate Botany; Vertebrate Physiology; Histology
   D. Ecology
   E. Development (Embryology, Developmental Biology)

III. Electives

Total Credit Hours: 30 Hours (15 of which must be upper level)

Table 2
THE REGENTS COLLEGE BIOLOGY PROGRAM OBJECTIVES

1. Describe the essential functions of cellular systems and the interrelationships of organisms and populations.
2. Define and apply the underlying principles of genetics or explain current theories of evolution.
3. Demonstrate an understanding of major innovations in the history of science or analyze current problems in bioethics using a variety of currently held assumptions.
4. Demonstrate upper level knowledge of systematic approaches to the study of life forms.
5. Demonstrate knowledge of ecological systems.
6. Demonstrate knowledge of modes of development among life forms.

References

Peinovich, P. E., & Nesler, M. S. (2000). Regents College Outcomes Assessment Framework. Albany, New York: Regents College, Academic Affairs.

Spahn, K. (1998, May). Rigor analysis: A comparative study of curriculum rigor across undergraduate and graduate courses. Paper presented at the 38th Annual Forum of The Association for Institutional Research, Minneapolis, Minnesota.

THE IR-CQI CONNECTION

Tracy Polinsky
Coordinator of Institutional Research
Butler County Community College

Introduction

What is CQI? Like its industrial counterpart TQM (Total Quality Management), Continuous Quality Improvement, or CQI, became popular a few decades ago in the United States. It was presented as a means of achieving organizational excellence, and many jumped on the bandwagon.
As is common with approaches du jour, "quality" appealed to many, was embraced by some, and was seriously adopted as a way of conducting business by few. Today, CQI is alive and well at certain institutions of higher education, which are making a conscious and ongoing effort to integrate CQI philosophies and tools into their problem-solving and process-improvement endeavors.

Many view CQI as a rigid formula to which they must adhere; however, there is nothing magical about CQI in and of itself. Quality means excellence. CQI, then, involves striving for excellence (good enough is not good enough) and continuously trying to improve oneself or one's institution. In order to better itself, an institution must first identify areas for improvement. The primary way to identify these areas is through assessment. Whether it is of a quantitative or qualitative nature, this assessment must yield accurate and reliable information on which decisions can be based. Because institutional researchers are by nature evaluators and collectors of data, they are a logical and valuable part of any college or university's CQI team.

This Institutional Researcher's Experience with CQI

In December 1998, I was asked to join the CQI Steering Committee at Butler County Community College. The committee's mission was twofold. Members were to monitor the effectiveness of college committees and to serve as official CQI experts and trainers for the campus community. The Steering Committee comprised 13 individuals, several of whom underwent intensive CQI "Trainer's Training" in spring 1999. The mission of the emergent Training Team was to help groups solve specific problems using the CQI approach. As a result, individuals would not only leave with practical solutions to their present problem, but would also be able to apply CQI strategies to other problems or processes. Since then, this CQI Training Team, of which I am part, has facilitated four problem-solving or process-improvement "workshops" at the college:

• A Scheduling Assessment Meeting (July 1999) arranged by the President for the President, President's Cabinet, and invited guests. The Training Team led participants through two days of examining the college's credit course schedule and arriving at ways to increase enrollment by making adjustments to the schedule. Since then, several of the ideas have been implemented at the college.

• A project conducted by the Advising Task Force (started in October 1999, ongoing). The Training Team led the task force through an examination of the current advising process and obstacles to successful advising. The group is currently exploring various advising models to determine which model would address these issues and work best at the college.

• A Service Excellence workshop conducted on the college's Professional Day (February 2000) for all front-line staff. Members of the Training Team engaged the group in various activities and taught the group CQI principles and strategies for continuous improvement. The Training Team helped participants develop service themes and standards of excellence for their work areas as well as ways to evaluate their success.

• A project undertaken by the CQI Steering Committee to improve the communication process at the college (initiated in February 2000, ongoing). The Training Team led the group through an examination of communication at the college.
Once the root causes of ineffective communication were uncovered, the group addressed them and developed a model for effective communication that has been recommended to the President for implementation. The Training Team is also teaching other committee members to become CQI trainers.

Why Should an Institutional Researcher be Involved with CQI?

Scientific Approach

CQI is deeply rooted in the scientific approach. Whether it is in problem identification or problem solving, a systematic approach is imperative. Processes must be carefully observed, studied, and documented, and the root causes of problems identified (as opposed to symptoms or "obvious" causes). Successful statisticians and researchers are by nature conscientious investigators and recorders of data and events. They know the importance of documentation and how much it will mean down the road. They are methodical and know how to collect valid, reliable, meaningful, and pertinent data.

CQI can be represented by the PDCA (Plan - Do - Check - Act) Cycle. In the planning stages of a CQI project, a problem-solving or process-improvement strategy is developed. "Do" refers to the implementation of the plan. During the checking phase, the phenomenon is studied to see if the implemented change made a difference. The group then adjusts its strategy during the "Act" phase, which then leads back to "Plan" and so on. This cycle can be likened to a scientific experiment wherein the strategy is the independent variable and the phenomenon of interest (e.g., enrollment) is the dependent variable. Of course, unlike rigorous scientific experiments, it is nearly impossible to control for extraneous variables in a real-life college setting.

Data

In quality efforts, decisions are no longer based on hunches or anecdotal information, but on sound data. Here, the importance of the institutional researcher on a CQI team cannot be overstated. Institutional researchers are skilled at a) collecting data, b) analyzing data, and c) communicating data. Often, individuals believe information is needed but do not know how to obtain it. The institutional researcher usually knows if the data already exist and the best way to procure information when they do not. They know how to design and conduct surveys, focus groups, and the like. They also know how to collect data properly, that is, to ensure that the data collected are valid and reliable.

Once the data are obtained, they must be understood. Namely, the data must be manipulated so that they are capable of answering the group's question(s). There are responses to be interpreted, data to be entered, and statistics to be applied. Researchers are also good pattern spotters and theme identifiers. Finally, it is not enough for the data analyst to understand the information; he/she must be able to effectively communicate it to others. Institutional researchers are well versed in the art of data reporting, having experience presenting data in virtually every format -- written and oral reports, PowerPoint presentations, tables, charts, and flipcharts. Most importantly, they know how to present information in a way that is understandable, logical, and relevant to their audience.

Customer Focus

CQI maintains a customer-oriented philosophy. Because institutions of higher education exist to provide services to their customers (students, community, etc.), they must be confident that their customers are pleased. In CQI, as in IR, customer feedback is an essential component of the improvement process.
Institutional researchers understand the importance of obtaining feedback (especially when calculating response rates). A large part of their jobs entails administering satisfaction surveys to the institution's customers, primarily its students and former students.

Assessment

At numerous points during a project, a CQI team relies on assessment. At the beginning stages, the current situation must be assessed. In later stages, the CQI team must evaluate proposed solutions. But the bulk of evaluation takes place during the "Check" phase of the PDCA Cycle. While this assessment does not occur until after a plan or solution is implemented, it must be mapped out during the initial planning stage.

Institutional researchers are valuable if not necessary components of a CQI team if for no other reason than to guide the team through assessment planning. While most individuals on the CQI team are familiar with assessment in a general sense, they are usually not proficient at formulating an effective assessment plan from scratch. Measurable objectives or outcomes are a critical component of any assessment plan, yet writing such "operational definitions" is not a skill that comes naturally to most people. Some may never have been exposed to such a thing, while others are simply out of practice. At any rate, the institutional researcher can assist them in this process.

When planning for assessment, evaluation criteria must be written. These criteria will later help the group determine if the implementation of their solution(s) helped them to achieve their goal(s). They must know what is to be measured, how to measure it, and how they will know if their solution was successful. Because institutional researchers are experienced measurers and writers of such objectives, they can not only facilitate the group's composition of the objectives, but also teach them these skills directly.

Mission

A focus on the mission of the institution is imperative to CQI. Projects and plans must be aligned with the institution's purpose, and individuals must be committed to not only meeting but also exceeding the college's goals. The institutional researcher, if involved in institutional effectiveness activities, already understands the foundational nature of the mission. She/he knows how to derive measurable outcomes from an institution's mission and objectives and how to collect data to determine if the institution is achieving its goals.

Resistance

It is natural for many individuals to resist assessment and to fear change. Many perceive assessment as a faultfinding mission, and hence a threat to their security and to the status quo. They may approach CQI efforts with caution, trepidation, or outright resistance. Institutional researchers face these challenges every day, at times worse than others. They realize the importance of introducing assessment and change slowly and carefully into an organization's existing system. And hopefully they have acquired a sensitivity to the concerns of others and have found ways to successfully assuage them.

An Example of the IR-CQI Connection in Practice

In summer 1999, the college President requisitioned the services of the newly formed CQI Training Team. He asked the team to facilitate a study of the credit course schedule and its possible effects on enrollment. Invited to attend this "Scheduling Assessment Meeting" were members of the President's Cabinet and other guests.
Before the meeting, the Training Team spent many hours reviewing what they had learned and preparing for the project. Because this was the first time the team had led a group through a project via CQI, the preparations were arduous and exhausting. In fact, it was at this point that one of the six original members of the Training Team resigned. Eventually, the team developed a plan that would seemingly address the issues and satisfy its charge.

Pre-Meeting

The CQI Training Team assembled relevant student data on the topic. These data consisted of the results of student surveys and focus groups. In short, the information revealed the most critical issues surrounding class scheduling from the perspective of the students. Also prior to the Scheduling Assessment Meeting, the Training Team asked participants to collect data from others in their divisions. Questions such as "How is the credit course schedule developed?" and "What factors influence enrollment?" were used to generate discussion. Participants were asked to bring this information with them when they attended the meeting. Thus, data collection was the first step in the CQI process, allowing the group to analyze the current situation before engaging in process improvement.

Introduction

The first part of the meeting involved an introduction to the topic and a statement of objectives. The Training Team also introduced participants to Continuous Quality Improvement, including an orientation to effective teamwork, the improvement cycle (Plan - Do - Check - Act), and some basic CQI tools.

Identifying Relevant Issues

Brainstorming was used as a means of generating many ideas. Participants were asked to record responses to the question, "What issues must be considered when developing and implementing the credit course schedule?" Student data and data collected from participants' staff were incorporated at this point. The CQI Training Team then led the group through an Affinity Diagram whereby ideas generated from brainstorming were clarified, discussed, and clustered according to common themes.

Identifying Root Causes

Once major themes were identified, root causes of less-than-maximum enrollment (with respect to the credit course schedule) were sought. An Interrelationship Digraph helped the group to examine and graphically portray the cause-and-effect links among the "idea clusters" generated. Once the digraph was completed, the group was able to determine which scheduling issues were affecting enrollment at the most fundamental level.

Establishing Evaluation Criteria

Participants received instruction in evaluation, including clarification and explanation of evaluation terminology. Enrollment data were also presented that enabled the group to identify benchmarks and goals. The group was then led through the development of Evaluation Criteria, via Brainstorming and 10-4 Voting (a CQI decision-making tool). During this stage, the group determined how they would measure enrollment during the later "Check" phase of the PDCA Cycle. The Evaluation Criteria established would later determine to what extent the implemented solutions accomplished the group's objective(s).

Identifying and Choosing Solutions

Brainstorming and an Affinity Diagram were again used to generate and then group all possible solutions. Participants then composed solution statements for each of the clusters that described the actions that would need to be taken. The CQI Training Team next facilitated the establishment of Decision Criteria against which the solution statements were judged.
These criteria served as a "reality check" for the solutions generated, by asking, in a sense, if the recommended actions were "do-able" and worth the effort. The participants then voted to prioritize the solutions.

Conclusion

The importance of evaluation was underscored. The meeting was summarized, and the group's original objectives were revisited. The group discussed what actions would be taken after the meeting. Finally, the CQI Training Team asked all participants to complete an evaluation of the meeting itself. These results were later analyzed and reviewed by the CQI Training Team, who used them to improve their own training efforts.

Post-Meeting

Decisions were made by the appropriate individuals to implement several of the solutions generated at the meeting. No official assessment has been conducted at this point. A CQI newsletter was designed and issued in May 2000, which informed the campus community of the status of the project.

My Unique Contribution to the Project as an Institutional Researcher

• Data collector. As an institutional researcher, I was the resident data "expert." I knew what data existed, where to get it, and how to get it. This applied to both quantitative and qualitative data. I readily knew what the data meant -- and what it didn't mean. And when data were needed that did not exist, I knew how to devise a way to get them.

• Theme identifier. I was able to recognize emerging themes quickly and easily, particularly during the Affinity Diagram activities when a plethora of ideas had to be clustered and "boiled down." I believe this ability comes primarily from my qualitative research experience, but also from my experience as a trend and pattern spotter, survey researcher, environmental scanner, and general data analyst.

• Relationship identifier. Whether we are drawn to institutional research because we are scientific and analytical, or whether our jobs make us this way, the bottom line is that researchers have certain characteristics. We understand relationships between and among variables. This skill was an asset particularly during the group's search for root causes of the phenomenon. I understood the cause-and-effect relationships between items, and I knew what could and could not be concluded based on the given data.

• Assessment specialist. Nowhere was my presence (as an institutional researcher) more critical to the project than in evaluation. Because I spend nearly half of my time planning, conducting, interpreting, and reporting the results of assessment, I have become one of the college's evaluation "experts." I was able to educate the group in evaluation concepts and terminology, as well as facilitate their writing of evaluation criteria (outcomes).

CQI Learning Experiences

Wow, what a trip. When mere babes, we were charged by the President to facilitate a project to increase enrollment. No pressure there! I would be lying if I said it was easy. It was stressful, demanding, and exhausting, but above all it was time-consuming. Although subsequent CQI projects have become easier, they have all been time-intensive. The amount of time spent preparing for CQI workshops is beyond anything we ever imagined. I can tell you that if you are considering joining (or initiating) CQI efforts at your institution, you must be prepared to work hard. And only join if you are an intrinsically motivated person.

Now for specifics. We learned to be prepared for anything and to be flexible enough to change our course when necessary.
Some portions of the workshop may take less or more time than expected -- be prepared to adjust quickly. You may notice in the afternoon that the participants are "brain-dead." We did, and decided to call it quits and set up a second session for another day. The CQI Training Team simply used the time in between to review what had transpired and plan accordingly for the next meeting.

In the worst-case scenario, a tool you have chosen to use may "flop." In other words, it may not accomplish your goal. This happened to us during the Scheduling Assessment Meeting. We had chosen to use a Fishbone to uncover root causes. While the Fishbone was an excellent tool in theory, it became literally too large to manage. (Since then, we have made modifications to it and have used it successfully.) We ended up taking the major ideas that resulted from the Fishbone and switching to the Interrelationship Digraph to address them. We used this opportunity to show the participants that we were monitoring and constantly improving our own training program, which was very CQI-ish.

As thorough as you are, you will never think of everything. We had planned our training without realizing the limited knowledge the participants had regarding assessment and goal setting. The vacant stares I received during the Evaluation Criteria phase indicated that we needed to educate them before we could proceed. So between the first and second sessions, we put together a "lesson" on evaluation and how to write measurable outcomes. Once they were taught the necessary information (a refresher for many, I suspect), they were ready to dig back in.

We learned logistical things, like how best to utilize the physical space of the meeting room. We have become experts at table, prop, and poster arrangement. We know how big the lettering needs to be for people to read it from a certain distance and what kind of markers do not bleed through onto the walls. Then there were the "little things." We have found that these are precisely the things that can make the biggest difference. For example, we always put bowls of goodies on the tables at the beginning of the day, filled with gum, mints, chocolate, aspirin… People love it!

Despite the foibles, the CQI Training Team believed that we did a pretty darn good job conducting our first session. And the feedback confirmed it! The evaluation forms that participants (anonymously) filled out contained some suggestions for improvement. But overall, they indicated that the participants thought very highly of the work we had done. In fact, many went out of their way to personally thank us for our efforts, which we greatly appreciated.

Conclusion

Given the amount of time and energy required to produce successful CQI problem-solving and process-improvement sessions, why would anyone voluntarily put himself/herself through this? The answer is simple. We believe that the CQI approach results in better decision making. It is not a panacea. Personally, I see it as a system that forces individuals to solve problems and improve processes in a logical and systematic way. In other words, I see it as a way of thinking rather than as a set of techniques. The tools are there primarily to promote sound thinking. As with anything, in order for CQI endeavors to be successful, the right players must be assembled. I have found that people with certain personality traits and values are well-suited and appreciated members of a CQI team.
In addition, particular individuals are valuable if not essential to a college's CQI efforts because of their unique experiences. The presence of an institutional researcher (or similar person) will make a difference. Their knowledge, skills, and understanding in scientific and systematic problem solving, data collection and analysis, and evaluation will undoubtedly aid an institution that is committed to Continuous Quality Improvement.

WE CAN'T GET THERE IN TIME: ASSESSING THE TIME BETWEEN CLASSES AND CLASSROOM DISRUPTIONS

Stephen R. Porter
Director, Office of Institutional Research
Wesleyan University

Paul D. Umbach
Graduate Research Assistant, Department of Education Policy and Leadership
University of Maryland

Abstract

In response to student and faculty complaints about the amount of time available to travel between classes, an analysis of the time between classes problem was conducted at a large, public research university. Using facilities, course scheduling, and student survey data, we discovered that many students had distances to travel between classes that would normally take longer than the allotted ten minutes. This forced them to leave class early, arrive to class late, or skip class altogether, and often left them with an inadequate amount of time to complete exams. These analyses supported a decision to implement a policy regarding student scheduling.

Introduction

Colleges and universities across the country are increasingly focusing their attention on the classroom behavior of students. A recent article in the Chronicle of Higher Education (Schneider 1998) suggests a rise in uncivil behavior of college students that ranges from arriving late to classes to physical assaults on faculty. One faculty member believes that the current generation of college students is more apathetic than in the past and is more likely to display uncivil behavior than ever before (Sacks 1996). Other research indicates that classroom incivilities and disruptions continue to have a tremendous impact on classroom learning (Boice 1996).

The costs of classroom incivilities are high. Not only does the increasing frequency of uncivil behavior impede the learning process, it also causes students to grow more "uninvolved, oppositional and combative" (Boice 1996, p. 480). Colleges and universities across the country are forming task forces and committees to examine the problem of classroom incivility and possible solutions.

One of the most common forms of uncivil behavior is students arriving late to class and leaving early (Boice 1996). Most would agree that these disruptions can be attributed to individual student motivation and disinterest (Wyatt 1992); however, on a large campus students may be arriving to class late and leaving early because of the distances they must travel to get from one class to another. Does the common ten-minute interval between classes give students enough time to get from one side of the campus to another? While a great deal of attention has been paid to students' reasons for disrupting class, little research has been done to assess the impact of distance between classes on classroom disruptions.
If the allotted time between classes were not enough, many campuses would be faced with a difficult and perhaps costly policy decision. Colleges could simply choose to accept students' tardiness and change scheduling practices by increasing the amount of time between classes. To make such a significant change in scheduling would create logistical challenges and cost perhaps thousands of dollars to implement. Colleges could also take measures that would attempt to change student behavior. Either option is certain to be difficult and costly. Before making such a dramatic policy decision, colleges would be wise to assess the impact that distance between classes has on students.

The University of Maryland, College Park, a large, public research university, was faced with such a policy decision. Students had become increasingly vocal about the difficulties they experienced arriving to class on time when they had only ten minutes to walk across campus. Given that the campus is approximately two square miles and consists of more than 400 buildings, few faculty and administrators were skeptical of the problems students were encountering. In addition, faculty were complaining of disruptions in class due to students arriving late and leaving early, and some faculty claimed that students had approached them with concerns about arriving to class on time when they were faced with only ten minutes to make large treks across campus.

A campus committee of administrators and faculty was appointed by the Provost to address the issue of distance between classes. The committee was tasked with understanding the extent of the time between classes problem and its impact on campus. Understanding the extent of the problem was especially important given the substantial costs of proposed changes to the class schedule. The campus had not performed any previous analyses on this topic, so we set out to collect and analyze data that would inform and assist the committee in their decision making.

Approach

In collecting reliable information for the task force, we combined "hard" data from the university course scheduling system with "soft" student survey data. To understand the extent of the problem, we first estimated the time it takes to walk between classes using Fall 1999 facilities and course data. We then used the data to classify undergraduate students into three groups: students with no Monday-Wednesday-Friday (MWF) back-to-back classes, students with MWF back-to-back classes who could travel between the classrooms in ten minutes or less, and students with MWF back-to-back classes whose travel time between the classrooms was greater than ten minutes. (Tuesday-Thursday classes were not considered because of their longer fifteen-minute break between classes.) These three groups of students were surveyed via email and the Internet to determine their support for changing the course schedule, their actions in response to the time between classes problem, and why they chose a course schedule that made it difficult to travel between classrooms. The survey was conducted at the beginning of the Spring 2000 semester and comprised an initial email describing the location of the survey website, followed by three follow-up emails. The response rate was 40%.
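A minimal sketch of the classification step is shown below. It is not the authors' code; the data structures, building names, and times are hypothetical, and it assumes that estimated walk times between building pairs are already available (derived as described in the next subsection).

```python
# Hypothetical sketch of the three-way classification described above.
# schedules: student -> list of (start_minutes, end_minutes, building) for MWF classes
# walk_minutes: (building_a, building_b) -> estimated walking time in minutes

def classify_students(schedules, walk_minutes, gap=10):
    groups = {"no_back_to_back": [], "within_10_min": [], "over_10_min": []}
    for student, meetings in schedules.items():
        meetings = sorted(meetings)                  # order classes by start time
        pairs = [
            (a[2], b[2]) for a, b in zip(meetings, meetings[1:])
            if b[0] - a[1] <= gap                    # back-to-back: next class starts within the break
        ]
        if not pairs:
            groups["no_back_to_back"].append(student)
        # unknown pairs default to 0 minutes (e.g., same building)
        elif any(walk_minutes.get(tuple(sorted(p)), 0) > gap for p in pairs):
            groups["over_10_min"].append(student)
        else:
            groups["within_10_min"].append(student)
    return groups

# Example with made-up data
schedules = {
    "s1": [(600, 650, "BLDG A"), (660, 710, "BLDG Z")],   # 10:00-10:50, then 11:00-11:50
    "s2": [(600, 650, "BLDG A"), (720, 770, "BLDG Z")],   # hour-long gap, not back-to-back
}
walk_minutes = {("BLDG A", "BLDG Z"): 13.5}
print(classify_students(schedules, walk_minutes))
```

A student falls into the third group as soon as any one of his or her back-to-back pairs exceeds the ten-minute walk estimate.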
Calculating times and distances between classes

To understand how long it takes to travel between classrooms across the campus, the Office of Records & Registration initially approached the problem by having students actually time how long it took to walk between pairs of buildings. The magnitude of this effort quickly became apparent and the project was abandoned. There are almost 4,500 unique building pairs for courses taught during the Fall 1999 semester, and having someone walk and time the distances between all the pairs was simply not practical. Indeed, this was the major stumbling block for the project, and we were forced to develop an alternative method to measure the times and distances between classrooms.

Our solution was simple. We combined the two dozen building pairs that had been timed by Records & Registration with an estimated distance for each pair to run a bivariate regression model predicting travel time from estimated distance. We then applied the results to the estimated distances for all building pairs to derive an estimated travel time for each building pair. This approach allowed us to calculate a reasonably accurate travel time while only requiring measured travel times between a few building pairs.

At our request, a detailed map was generated by Facility Drawings with a layout of 100 yards per grid square (see Figure 1). Each grid line on the map was numbered starting with zero. The grid coordinates for each classroom building were then determined and used to calculate the Euclidean distance (i.e., distance "as the crow flies") in hundreds of yards between each possible classroom building pair. From their previous attempt, Records & Registration had already timed approximately two dozen trips between building pairs. Combining these times with the respective calculated distances in a simple bivariate regression provided an estimated walking time per hundred yards of distance. The bivariate regression equation fit the data well (R2 = .88), and according to the model results it takes on average a little over one minute to walk 100 yards across campus, a plausible result. Using the estimated distances from the grid map calculations and the relationship between walking time and distance from the regression model allowed us to estimate a travel time for all instructional building pairs on campus. Two minutes were added to these times to account for miscellaneous actions such as bathroom breaks between classes, the time it takes to get from building entrances to classrooms, etc.

Table 1 shows the distribution of walking times for undergraduate students with MWF back-to-back classes during the Fall 1999 semester. The first column gives the number and frequency of student/classes per week. For example, if a student has a class on MWF that is followed by a class that meets only on Monday, she is counted once in this column. If she had a class on MWF followed by another MWF class, this student is counted three times. The second column gives the number and frequency of students, counting each student only once. Out of the 8,924 students with back-to-back classes, 2,570 (28.8%) have one or more back-to-back classes with walking times of 10 minutes or more. These students comprise 10.4% of all undergraduates registered during the Fall 1999 semester.
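The estimation itself reduces to a few lines of code. The sketch below is a hedged illustration, not the authors' program: the grid coordinates, the timed walks, and the function names are hypothetical. It computes Euclidean distances in hundreds of yards from grid coordinates, fits an ordinary least-squares line to the handful of timed walks, and predicts a walk time for any building pair, adding the two-minute allowance described above.

```python
# Hypothetical sketch: estimate walk times for all building pairs from a few timed walks.
from math import hypot

# Grid coordinates of classroom buildings, in units of 100 yards (made-up values)
coords = {"BLDG A": (2.0, 3.5), "BLDG K": (9.0, 4.0), "BLDG Z": (14.5, 11.0)}

def distance(b1, b2):
    """Euclidean ('as the crow flies') distance between two buildings, in hundreds of yards."""
    (x1, y1), (x2, y2) = coords[b1], coords[b2]
    return hypot(x2 - x1, y2 - y1)

# The couple of dozen timed walks (distance in hundreds of yards, time in minutes) would go here
timed = [(1.5, 1.8), (4.0, 4.5), (7.1, 7.6), (10.3, 11.0)]

# Ordinary least-squares fit of time = a + b * distance
n = len(timed)
mean_x = sum(d for d, _ in timed) / n
mean_y = sum(t for _, t in timed) / n
b = sum((d - mean_x) * (t - mean_y) for d, t in timed) / sum((d - mean_x) ** 2 for d, _ in timed)
a = mean_y - b * mean_x   # the study reported roughly one minute per 100 yards

def estimated_travel_time(b1, b2, buffer_minutes=2.0):
    """Predicted walk time plus a flat allowance for exiting/entering buildings, restroom stops, etc."""
    return a + b * distance(b1, b2) + buffer_minutes

print(round(estimated_travel_time("BLDG A", "BLDG Z"), 1))
```

Pairs whose predicted time exceeds the ten-minute break are the ones flagged in the analysis that follows.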
From the preceding analysis, we can see that the time between classes problem is substantial. During the Fall 1999 semester over 2,500 students registered for classes that were too far apart to travel between during the ten-minute break. While it is possible that these students were still able to travel between classes in the allotted time, such a large number of students indicates that class disruptions due to these schedules could be significant.

Survey data

Student responses to not having enough time to travel between classes are listed in Table 2. Students in the third group, those registering for at least one pair of MWF back-to-back classes in classrooms greater than a ten-minute walk apart, were asked their actions in response to their back-to-back class schedule. Students were allowed to choose more than one action. Only 23% responded that they had enough time traveling between classes. The most common student response was leaving class early, with over half the group indicating that they chose this course of action. About 12% indicated they arrived for class late, and about 11% simply skipped class. Disturbingly, almost 40% stated they had difficulty completing examinations because of their schedules.

The survey results indicate that over three-fourths of the students estimated to have problems traveling between classes did indeed have problems. Most of these students reacted by leaving class early, with smaller proportions arriving to class late or skipping class altogether. In addition, we note that these disruptions are not randomly distributed amongst all class types. Because juniors and seniors will be taking a larger proportion of courses that satisfy their major, and because courses within a major tend to be taught within the same one or two buildings, freshmen should be more likely to have problematic class schedules. Using our estimated data from Table 1 and student class level, we find that, of the students with back-to-back classes on MWF in Fall 1999, freshmen were less likely than upperclassmen to have travel times of ten minutes or less. About 65% of freshmen with back-to-back classes had travel times of less than ten minutes, compared with 71% of sophomores, 76% of juniors, and 76% of seniors.

Finally, we asked students why they constructed class schedules that made it difficult to travel between classrooms in the allotted ten minutes. Students were again allowed to choose more than one reason. The results are presented in Table 3. The two most common reasons were related to the courses themselves: either one of the courses was a required course or it was the only course offered at the time needed. Interestingly, the third most popular response was "wanted a compact course schedule." Many students are registering for back-to-back courses not only out of necessity, but also out of convenience. In the focus group where we pilot tested the student survey, many participants said that students schedule these back-to-back classes so that they won't have any "wasted" time by having a half-hour or hour between classes.

Conclusion

These data provided a great deal of insight into the problem of the time between classes and informed the task force committee about several aspects of the problem. First, the data indicate that the amount of time between classes is a significant problem at the University of Maryland, College Park. According to our analysis, a large proportion (over one fourth) of the students taking back-to-back classes on MWF do not have enough time between classes to arrive on time to their next class. Second, the limited time students have to travel from one class to another is affecting the learning process.
It causes students to leave class early, arrive to class late, and skip classes altogether, impacting both the individual student and the classroom as a whole. The time between classes also appears to limit the contact some students have with faculty. Most alarming is that a large proportion of students indicate they have encountered difficulties in finishing exams due to the time they have to travel between classes. Freshmen are also more likely to have back-to-back classes in rooms far apart. Given the importance that the first year of college has on the success of students, this is of particular concern.

Students indicated that they selected their back-to-back scheduling for many reasons, but most did so out of necessity. Students were forced to schedule classes due to limited offerings, major course requirements, and time conflicts. However, students' reasons for scheduling back-to-back classes also indicate that they do so not only because of the unavailability of courses but also out of convenience. Many students want compact schedules and appear to recognize the problems in scheduling back-to-back courses. Given that today's students often work to help pay for their education, it is not surprising they would want a compact schedule that allows them to pursue those efforts. As the traditional college education where students reside on campus and attend school full-time gives way to students living at home and working while attending college, administrators must increasingly take into account the external pressures faced by students when determining scheduling policies.

Armed with our analysis, the committee was faced with a difficult policy decision. They had two options: they could either accept student scheduling behavior and change the current scheduling system, or they could keep the current schedule and take measures that would attempt to change student behavior. With so many students experiencing problems, the committee knew that some action must be taken to help alleviate the problem. Given the costs and the enormous task of changing scheduling practices by increasing the time between classes from ten to fifteen minutes, the committee recognized the impracticality of allowing more time between classes. They recognized that a problem did exist and began to search for other solutions. One inexpensive solution was to use the results from our distance-time analysis to flag students at registration who may have problems. Currently, the University is working to implement a warning into the registration program that will notify students when they have scheduled back-to-back classes that are more than a ten-minute walk apart. So, as students register for classes by phone or the Internet, they will be warned when they are scheduling back-to-back classes that may be in buildings that are too far apart.

Figure 1. University of Maryland, College Park Facilities Map with 100-Yard Grid

Table 1. Distribution of the Time Between MWF Back-to-Back Undergraduate Classes, Fall 1999

Time between classes     Student/classes per week        Students
                         Number       Percent            Number      Percent
Less than 10 minutes     13,251       69.9%              6,354       71.2%
10 - 10:59                1,555        8.2%                659        7.4%
11 - 11:59                1,168        6.2%                540        6.1%
12 - 12:59                  623        3.3%                336        3.8%
13 - 13:59                  862        4.5%                408        4.6%
14 - 14:59                  819        4.3%                326        3.7%
15 - 15:59                  408        2.2%                184        2.1%
16 minutes or more          264        1.4%                117        1.3%
TOTAL                    18,950      100.0%              8,924      100.0%

Table 2. Student Reactions to the Time Between Classes Problem

I left class early. 56.6%
I arrived at class late. 12.1%
I skipped class because I was running late. 10.7%
I had difficulty completing in-class examinations. 39.0%
I was unable to speak with the instructor after class. 11.0%
I did not have any problems getting to class on time. 23.1%

Note: N=290. Question: "Which of the following did you tend to do because of this back-to-back class schedule? Please check all that apply."

Table 3. Reasons Why Students Schedule Back-to-Back Classes in Rooms Far Apart

Accommodate my work schedule. 25.2%
Accommodate family schedule. 3.1%
At least one is a required course. 49.7%
Only course offered at the time I needed. 43.1%
Only course available when I scheduled classes. 30.3%
Wanted a compact schedule. 37.6%
Limited course offerings. 24.1%
Had other scheduling conflicts. 35.2%
Transportation issues (bus, metro, car pool, rush hours, etc.). 6.2%
Other. 5.2%

Note: N=290. Question: "Why did you schedule these two courses back-to-back? Choose as many reasons as apply."

References

Boice, B. 1996. Classroom Incivilities. Research in Higher Education 37(4): 453-486.

Sacks, P. 1996. Generation X Goes to College: An Eye-Opening Account of Teaching in Postmodern America. Chicago, IL: Open Court.

Schneider, A. 1998. Insubordination and Intimidation Signal the End of Decorum in Many Classrooms. The Chronicle of Higher Education.

Wyatt, G. 1992. Skipping Class: An Analysis of Absenteeism Among First-Year Students. Teaching Sociology 20: 201-207.

ASSESSING THE ASSESSMENT DECADE: WHY A GAP BETWEEN THEORY AND PRACTICE FUELS FACULTY CRITICISM

Michael J. Strada
Professor of Political Science, West Liberty State College
Visiting Professor, WV University; Co-Director, FACDIS Consortium

The North Central Association of Colleges and Schools has stated concisely that "Programs to assess student learning should emerge from, and be sustained by, a faculty and administrative commitment to excellent teaching and learning" (NCA, 2000, p. 32). But excellence seems to represent a moving target. As Winona State University Assessment Director Susan Hatfield (1996) points out, the validation of excellence in higher education has shifted from an earlier emphasis on inputs and processes to a more recent focus on outcomes. This fundamental change, I believe, is only one of several imbalances in the practice of assessment that cry out for equilibrium.

A highly respected treatise on assessment concludes with an epilogue entitled "A Matter of Choices." The authors write that assessment can be conducted in various legitimate ways. "As such, the process of planning and implementing assessment programs requires many choices" between philosophically different alternatives. However, these pairs of alternatives need not be seen as mutually exclusive; in fact, they should complement each other in striking "a balance that works" (Palomba and Banta, 1999, p. 331). They discuss three critical sets of choices that institutions must face in the quest for balanced assessment:

• improvement versus accountability as motivations for assessment;
• quantitative versus qualitative means of assessment;
• course-based versus non-course models of assessment.

Half of my time is spent as a Professor of Political Science at West Liberty State College, where I have served for three years as Co-Chair of our College Assessment Committee, exposing me to many of the asymmetries found in assessment practices.
My instincts as an instructor tell me that Palomba and Banta are right when they support equilibrium, or homeostasis, as desirable concerning the choices cited above. I would go even further, and suggest that when gross imbalances exist, they belie something akin to pathology in academe. When I look around at current practices at my home institution, at the other institutions in West Virginia, and nationally (as recounted in books, a major research survey, and journals like Change, Research in Higher Education and Assessment Update), I see a system rife with disequilibrium concerning these three vital issues. That is, a system motivated more by accountability than desired improvement, employing quantitative techniques far in excess of qualitative ones, and conceptualized chiefly as non-course-based. What troubles me about the status quo is that it symbolizes a jarring disconnect between: (1) the inclusive theory of assessment; and (2) the equally exclusive 187 practice of assessment. The American Association for Higher Education’s first principle of good practice says that “the Assessment of student learning begins with educational values” (AAHE, 1989, p.2), and my educational values tell me that these imbalances are unhealthy The assessment movement practically owned the decade of the nineties in higher education. However, the “assessment of assessment” undertaken in a recent survey of 1,393 institutions, conducted by the National Center for Postsecondary Improvement, or NCPI, chronicles decidedly unimpressive results (Peterson and Augustine, 1999). As the first major study asking exactly what institutions do with the extensive data that previous studies say are being gathered on our campuses, the NCPI authors want to know if assessment data is used profitably, because the assessment literature itself posits that student assessment should not become an end in itself, but rather, serve as a means to improve education. The NCPI’s baseline conclusion is that “student assessment has only a marginal influence on academic decision-making” (Peterson and Augustine, 1999, p. 21). Among the many valid questions raised by this research are descriptive and prescriptive ones about the nature of the faculty role in gathering and using assessment data. Key Institutional Researchers trumpet the axiom that assessment works best when faculty-driven, and Palomba and Banta underscore the point when they posit that “faculty members’ voices are absolutely essential in framing the questions and areas of inquiry that are at the heart of assessment” (1999, p. 10); but current practice almost seems to mock this proposition. Another prestigious group of authors asserts that “it is fact that most faculty still have not considered the assessment of student outcomes seriously” (Banta, Lund, Black, & Oblander, 1996, p. xvii). The 1999 NCPI study (Peterson and Augustine, 1999) concurs, reporting that only 24 percent of institutions say faculty members involved in governance are very supportive of assessment activities. An earlier Middle States Association survey (MSA, 1996) found that fear of the unknown, plus heavy workloads, contribute to pervasive faculty resistance to assessment. I agree that unreflective inertia on the part of some professors represents a genuine problem for assessment, but faculty reluctance to change explains only part of the problem. 
Even if every instructor in America reads Spencer Johnson’s (1998) best-selling parable depicting humanity’s penchant for fearing the unknown, then meditates on its insights (change happens, anticipate change, monitor change, adapt to change quickly, change, enjoy change, be ready to change again), widespread faculty support for assessment will not suddenly materialize. Many professors actively engaged in assessment have expressed thoughtful criticisms regarding the current modus operandi. In particular, instructors lack confidence in assessment’s relevance (applicability to classroom teaching and learning), validity (truly measuring learning outcomes), proportionality (institutional benefits of assessment commensurate with effort devoted to it), and significance (answering the question that comes naturally to academics: So what?) Addressing these issues is essential for the movement’s goal of an assessment culture developing on-campus. Based on my own 188 experience I would hypothesize that many faculty, though involved in assessment, have failed to prioritize it above competing agendas. And what results from relegating assessment to such second-class citizenship? Deferring initiative for assessment to administratively-oriented professionals who typically are not teachers. For those professors truly infected by the virus of skepticism, one antidote consists of a large dose of qualitative methods, or soft data. Assessment’s practitioners have clung to quantification like David Letterman to velcro, a syndrome critics call the data lust fallacy. The 1999 NCPI national survey found that the norm consists of institutions using “easily quantifiable indicators of student progress and making only limited use of innovative qualitative methods” (Marchese, 1999, p. 54). Yet, it strikes me as naive for institutional researchers to expect over-reliance on empiricism to capture the hearts and minds of dubious instructors. One pair of advocates for greater reliance on qualitative assessment argues that a pervasive myth needs to be disputed. This myth assumes that, since qualitative methods communicate in words rather than numbers, they are not as rigorous. The authors contend, however, that “These methods, when applied with precision, take more time, greater resources, and certainly as much analytical ability as quantitative measures” (Upcraft and Schuh, 1996, p. 52). Another observer notes that the flexibility of qualitative techniques allows them to operate in a more natural setting and “permit the evaluator to study selected issues in depth and detail” (Patton, 1990, pp.12-13). A sub-text reason why assessment has featured quantification may be that numbers are more easily processed by state legislators and external governors–those powerful individuals vigorously applying pressure for institutional accountability. Once the cod-liver-oil of soft data helps to balance the campus assessment cocktail, my second antidote for the virus of skepticism infecting some faculty is an equally healthy dose of course-related process and content. Put simply: process relates to the heuristic “how” of teaching and learning; content refers to the heuristic “what” of teaching and learning. These issues embrace what faculty know and care about, and they are also expressed in language congenial to the professoriate. The standard approach of using standardized tests to measure student outcomes in areas such as math, writing skills, critical thinking, and computer literacy is useful, but insufficient. 
Free-standing outcomes testing entails a feedback loop back to the classroom that is too amorphous. Practitioners relying on outcomes testing exclusively exhibit something of the myopia lampooned by Plato in his "Allegory of the Cave." Plato's mythic prisoner, chained in a manner allowing him to see only the shadows of life on the cave wall, not life itself, parallels those willing to settle for the shadows of the educational process, as opposed to education itself. The 1999 NCPI research supports this line of reasoning, finding that "relatively few links exist" between measures of student assessment and the faculty's classroom responsibilities. Germane to this gap is Palomba and Banta's assertion that "integrating assessment activities into the classroom and drawing on what faculty are already doing increases faculty involvement" (1999, p. 65). Emulating best practices rather than worst practices is essential, and an NCA Assessment Consultant recently praised Winona State University for the clever incentives devised there to foster faculty participation in assessment activities (Lopez, 2000). Not coincidentally, the half-time director of assessment at Winona State, Susan Hatfield, spends the other half of her time teaching in the Communications Department. Therefore, pedagogical process and content pertinent to the faculty mind-set ought to be blended liberally into the assessment mix. But too seldom does this happen. A well-known advocate of Classroom Assessment Techniques (CAT) contends that the one-minute paper (now used in over 400 courses at Harvard) provides valuable feedback from student to instructor, quickly and efficiently, making it an example of CAT worth emulating (Cross, 1998). One program steeped in CAT operates at Raymond Walters College (University of Cincinnati), and uses the course grading process for both departmental and general education assessment. Notably, the mind behind assessment at Raymond Walters is a chemistry professor, Janice Denton, who splits her time between the classroom and administering assessment. Her consultancy at my home institution impressed me as replete with creative ideas. However, a meaningful spillover effect at this institution eludes detection. My sense is that the key players (Department Chairs) accept many of Denton's ideas, but don't know how to apply the concepts to their own bailiwick. I believe that the rigorous course syllabus can provide concrete hooks to ground assessment in the classroom experience that Department Chairs understand and value; thus I have begun conducting seminars there on the relationship between model syllabi and assessment. The other half of my time is spent at West Virginia University, serving as Co-Director of a statewide international studies consortium (FACDIS), which includes all 20 of West Virginia's public and private institutions. This role has given me an appreciation for the potency of improved course syllabi to enhance both faculty and course development. For two decades, FACDIS has relied on improving course syllabi as its principal means of holding faculty accountable. The consortium involves 375 faculty from more than 15 disciplines in projects funded by a combination of state funds and $1.5 million from competitive external grants. FACDIS has received two prestigious national awards in the process. The vital resource of an exemplary course syllabus can link assessment to the classroom, and it can also generate innovative soft data germane to pedagogical process and content.
A recent article develops the case for more sophisticated course syllabi (Strada, 2000). Just as the last thing a fish would notice is water, academics tend to overlook the value of a comprehensive course syllabus. It seems too prosaic for some higher education professionals to take seriously. But despite operating largely in obscurity, a nascent body of literature appreciative of the syllabus’ diverse contributions is beginning to emerge (Altman and Cashin, 1992; Birdsall, 1989; Grunert, 1997). The only book-length treatment of syllabi considers course content, course structure, mutual obligations, and procedural information as basic necessities, but advocates a truly “reflective exercise” serious enough to improve courses by clarifying hidden beliefs and 190 assumptions as part of a well-developed philosophical rationale for the course (Grunert, 1997). Ideally, I look for part of a professor’s academic soul to shine through the pages of a thoughtful syllabus. The potential benefits of creating more complex syllabi fall into three categories. First and foremost, good syllabi enable student learning by improving the way courses are taught. This benefit seems transparent to veteran instructors who have worked to improve a syllabus and know how it adds efficiency to organizing the course, saves time in future semesters, and establishes a paper trail to highlight the good things they already do in the classroom. Such intuitive insights are bolstered by a study examining commonalities found among Carnegie Professors of the Year recognized by the Council for Advancement and Support of Education (CASE). University of Georgia Management Professor John Lough spawned the idea of dissecting the behavior of CASE Professors of the Year to see what makes them tick--a form of best-practices benchmarking. The universal common denominator cited by Lough is that “Their syllabi are written with rather detailed precision. Clearly stated course objectives and requirements are a hallmark. They employ a precise, day-by-day schedule showing specific reading assignments as well all other significant requirements and due dates” (Lough, in Roth, Ed., 1996, p. 196). Closely related to energizing teaching and learning is a second benefit of sophisticated syllabi that remains more opaque to academic eyes: use in faculty evaluation. A recent book purporting to explain every aspect of exercising the duties of a Department Chair, fails to include the word syllabus in its index, nor could I locate the word syllabus in the book’s 279 pages (Leaming, 1998). An elegant syllabus includes lesson plans that provide the only true road map of what is really being taught, and, how it is being taught, in that course. The concept of a lesson plan is dismissed too summarily by higher education faculty and administrators as pertinent only to secondary schools (therefore beneath us). Yet, my experience tells me that lesson plans help to establish an upward course trajectory from semester to semester because the process is a cumulative one: you no longer backslide by forgetting something effective that you did five years ago, or, by failing to ground a trial balloon that didn’t fly last time out. In the one course that I teach every semester, I revise lesson plans immediately after class. In this way, a lesson plan evolves in ways analogous to the process of pecking away at a script. Precise lesson plans also represent something of a pedagogical insurance policy for institutions with aging faculty. 
For example, at my home institution, a majority of professors in the School of Liberal Arts are older than 55. If illness strikes, good lesson plans would help to protect the academic integrity of what transpires in the professor’s absence. Because the comprehensive syllabus and its lesson plans are under-appreciated, it is not surprising that academic administrators rarely grasp the syllabus’ pertinence to promotion and tenure decisions. 191 Completely absent from the assessment script is any hint that the exemplary course syllabus is a player on the academic stage. This is unfortunate, because a fine syllabus contains what is tantamount to the DNA code for an endangered species: qualitative assessment that is creative and relevant to curricula. Curricular structures matter, and the solid planning endemic to worthy syllabi yields dividends that can help to bolster curricular integrity. Even more importantly, dense syllabi allow us to forge substantive links between the three curricular levels of the academy which researcher Robert Diamond says currently proceed in random directions: individual courses, programs of study at the departmental level, and general education programs at the institutional level. The disconcerting result, claims Diamond, is that most free-wheeling curricula “do not produce the results that we intend”(1998, p. 2). Another higher education analyst similarly bemoans the curricular randomness noted above, suggesting that “institutions tend to frame policies at the global level, leaving the specifics of learning to disciplines comprised of single courses, and those disciplines seldom have the necessary resources” (Donald, 1997, p. 169). Linking these curricular levels in meaningful ways can only occur by holding faculty accountable, but doing so without violating their academic freedom–which is sure to happen once you tell them what they should teach (content), or, how they should teach it (process). Only sophisticated syllabi provide detailed and accurate snapshots of how content and process come to life in the classroom. Only thoughtful syllabi afford instructors the breathing space to reveal their pedagogical essence, thus facilitating scrutiny, but without rigid or heavy-handed directives. Only serious syllabi provide extensive soft data to augment the hard data typically generated to satisfy demands for curricular accountability emanating from oversight bodies. I am passionate about the virtues of solid syllabi because I have seen them bear fruit: in the efforts of the FACDIS consortium, and in my own classroom. However, while sophisticated course syllabi can be used for either faculty evaluation or assessment purposes, it is a cardinal assessment principle that these two processes should function separately at any given institution, to avoid the possibility of conflict of interest between assessment and faculty evaluation. Assessment professionals can facilitate the course syllabus emerging as the fulcrum linking the three levels of the academy. In order to do so, they would benefit from insights gleaned from educational psychologist Robert Sternberg (1995). He attacks standardized testing (typically used in higher education assessment) for its failure to incorporate the vital element of creativity. Thirty-one years as a teaching professor in higher education have convinced me that the value of creativity in solving academia’s problems remains ill-appreciated. 
Academics seem to have big left-brains, but small right-brains; the academy loves science, but mistrusts experiential insight. Consequently, higher education tends to undervalue creativity. A counterpoint to this tendency materialized recently when the President of my home institution, Ronald Zaccari, received the American Association of University Administrators’ Eileen Tosney Award, given annually for “administrative innovation.” The AAUA noted Zaccari’s work in art, especially sculpture, as contributing 192 to his innovative efforts. In 2000, he presented a keynote address to the Association of Institutional Researchers, challenging IR people to think more creatively. In seconding this motion, I recommend balancing assessment with more soft data, concern for improvement of instruction, and the creation of course-based efforts. Fortunately, the sophisticated course syllabus can be employed to realize each of these worthy ends more comprehensively than the portfolios and capstone courses usually cited in the literature as examples of creative assessment. In conclusion, the Institutional Research literature’s best-case scenario–that assessment efforts be faculty-driven–makes good abstract sense. However, in the real world of widespread faculty skepticism about assessment, wisdom counsels that IR professionals nurture faculty support more creatively; preferably where they live–in and around the classroom. The common polemical cement housing both administrators and faculty is still damp enough to preclude predicting the future with any certainty. Four plausible scenarios still seem capable of materializing during the next decade: 1) assessment as faculty-driven; 2) assessment as faculty-supported; 3) assessment as faculty-tolerated; 4) assessment as faculty-denigrated. In my view, the first option represents as ideal type that will occur rarely under special circumstances. The second option is certainly feasible, if assessment practitioners make an effort to engage the issues of relevance, validity, proportionality, and significance that rankle the professoriate. I see the third option as a reflection of the status quo, and likely to continue unless more creative thinking is exhibited by all concerned. However, the worst-case scenario of the fourth option should not be discounted as impossible. Realistic faculty know that the age of accountability will not soon disappear, but unless assessment is constructively linked to the courses they teach, even their acquiescence cannot be taken for granted. The North Central Association’s extensive, decade-long review of assessment in 1999 concludes somberly (much like the NCPI) that “In institutions where key faculty have not claimed ownership, or participated wholeheartedly and in large numbers, they have had great difficulty in launching and developing their assessment programs” (Lopez, 1999, p. 9). The report places a great deal of emphasis on the potency of opposition by “faculty leaders” (as opposed to rank-and-file faculty) in this comprehensive NCA document. This corrosive problem of influential senior faculty speaking out against assessment is something that “institutions are reluctant to bring up in conversation or written documents,” but if not carefully defused, can become the “most persistent and deleterious” of all the obstacles to successful assessment (1999, p. 11). From my perspective, it looks like the assessment literature is unaware of another valuable resource directly relevant to this issue. 
If administrators are hypothetically from Mars, then many faculty in higher education are from Venus. Hailing symbolically from different planets, the chasm between these denizens of academia can be bridged creatively by those relatively few split personalities, like Janice Denton (Raymond Walters College) and Susan Hatfield (Winona State University), who hold academic rank and teach half-time while running exemplary assessment programs as their alter-ego to the classroom. As a practitioner of this 50/50 model of time-structuring for the past 21 193 years, I have labeled this dichotomy as the Lokai role (named for the character played by Larry Storch in the original Star Trek series). Lokai is black on his left side and white on his right side, exactly opposite of a rival race colored white on the left side and black on the right side. To Star Trek’s audience, of course, Lokai and his bitter enemy seem barely distinguishable–but to the protagonists–they might as well come from different planets. I understand that some risks exist for people like Janice Denton and Susan Hatfield who play a Lokai role on-campus. However, these are personal political risks. For those serendipitous institutions having individuals performing Lokai roles, the chances of making assessment operate in ways congenial to faculty values are better if they exploit this resource than if they do not. References Altman, H., & Cashin, W. (1992). Writing a syllabus. Center for Faculty Evaluation and Development, Kansas State University. Angelo, T., Ed., (1998). Classroom assessment and research: an update on uses, approaches, and research findings. San Francisco: Jossey-Bass. Astin, A. (1996). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Portland: Oryx Press. Banta, T, Lund, J., Black, K., & Oblander, F., Eds. (1995). Assessment in practice: Putting principles to work on college campuses. San Francisco: Jossey-Bass. Braskamp, L., & Ory, J. (1994). Assessing faculty work: Enhancing individual and institutional performance. San Francisco: Jossey-Bass. Brookhart, S. (1999). The art and science of classroom assessment: The missing part of pedagogy. Washington, DC: ASHE-ERIC Higher Education Report. Cerbin, W. (1994). Connecting assessment of learning to improvement of teaching through the course portfolio. Assessment Update, 7 (1), 4-6. Chickering, A., & Gamson, Z., Eds. (1991). Applying the seven principles for good practice in undergraduate education. San Francisco: Jossey-Bass. Cross, K. P. (1998). Classroom research: Implementing the scholarship of teaching. In T. Angelo, Ed., 5-22. Diamond, R. (1998). Designing and assessing courses and curricula: A practical guide. San Francisco: Jossey-Bass. Dill, D. (2000). Is there an academic audit in your future? reforming quality assurance in higher education. Change, July/August, 35-40. 194 Donald, J., & Erlandson, G., Eds. (1997). Improving the environment for learning: Academic leaders talk about what works. San Francisco: Jossey-Bass. FLAG Website: The field-tested learning assessment guide: Go to http://www.wcer.wisc.edu/cl1/flag/ Gibbs, G., Ed. (1995). Improving student learning through assessment and evaluation. London: Oxford Centre at Oxford Brookes University. Glassick, C., Huber, M., & Maeroff, G., Eds. (1997). Scholarship assessed: Evaluation of the professoriate. San Francisco: Jossey-Bass. Grunert, J. (1997). The course syllabus: A learning-centered approach. Bolton, Mass: Anker, Hatfield, S. (1996). 
Guidelines for assessment. Winona, Minnesota: Winona State University. Hutchings, P., Ed. (1998). How faculty can examine their teaching to advance practice and improve student learning. Washington, DC: AAHE. Johnson, S. (1998). Who moved my cheese? New York: G.P. Putnam’s Sons. Leaming, D. (1998). Academic leadership: A practical guide to chairing the department. Bolton, Mass: Anker. Lopez, C. (1999). A decade of assessing student learning: what we have learned: what’s next? 104th Annual Meeting, North Central Commission on Institutions of Higher Education. Go to: http://www.ncacihe.org/aice/assessment/index.html ____________. (2000). The faculty role in assessment: using the levels of implementation to improve student learning. Fairmont, WV: Workshop Presentation. Lucas, A. (2000). Leading academic change: Essential roles for department chairs. San Francisco: Jossey-Bass. Marchese, T. (1999). Revolution or evolution? Gauging the impact of institutional student assessment strategies. Change, September/October, 53-58. North Central Association (NCA) of Colleges and Schools. (2000). Assessment of student academic achievement: levels of implementation. Addendum to the Handbook of Accreditation. 195 Outcomes Assessment Resources on the Web: Go to: http://www.tamu.edu/marshome/assess/oabooks.html Palomba, C., & Banta, T. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass. Peterson, M., & Augustine, C. (2000).Organizational practices enhancing the influence of student assessment information in academic decisions. Research in Higher Education, 41 (1), 21-47. Roth, J., Ed. (1996). Inspiring teaching: Carnegie professors of the year speak. Bolton, Mass: Anker. Rubin, S. (1985). Professors, students, and the syllabus. Chronicle of Higher Education. Sternberg, R., & Kolligan, J., Eds. (1990). Competence considered. New Haven: Yale University Press. Sternberg, R. (1995). Defying the crowd: Cultivating creativity in a culture of conformity. New York: Free Press. Strada, M.. (2000). The case for sophisticated course syllabi. In, To Improve the Academy. Bolton, Mass: Anker. Upcraft, M., & Schuh, J.(1996). Assessment in student affairs: A guide for practitioners. San Francisco: Jossey-Bass. Walvoord, B., & Anderson, V. (1998). Effective grading: A tool for learning and assessment. San Francisco: Jossey-Bass. Wright, B. (1997). Evaluating learning in individual courses. In J.G. Gaff, Ed., Handbook of the undergraduate curriculum: A comprehensive guide to purposes, practices, and change. San Francisco: Jossey-Bass. Wright, W., et al. Portfolio people: teaching and learning dossiers and innovation in higher education. Innovative Higher Education, 24 (2), 89-103. 196 STRUCTURAL/ORGANIZATIONAL CHARACTERISTICS OF HIGHER EDUCATION INSTITUTIONS LEADING TO STUDENT PERFORMANCE, LEARNING, AND GROWTH: A RESPONSE TO ACCOUNTABILITY AND ACCREDITATION FORCES IN TWO AND FOUR YEAR SECTORS Linda C. Strauss Director, Penn State LEAP Program/Interim Director Comprehensive Studies Program Penn State University J. Fredericks Volkwein Director and Professor Center for the Study of Higher Education Penn State University Introduction This paper examines the structural/organizational characteristics associated with positive student performance, learning, and growth at two and four-year institutions. The importance of this research is based on three external forces. 
First, accrediting agencies (Middle States Association of Colleges and Schools, Western Association of Schools and Colleges' Commission for Senior Colleges and Universities, North Central Association of Colleges and Schools) have been revamping their policies to stress student learning (McMurtrie, 2000). The Council of Regional Accrediting Commissions recently drafted new standards of accreditation that include a "…focus on student learning instead of institutional preferences" (Carnevale, 2000, p. A58). A review of the guidelines and mission statements of accrediting agencies reveals the inclusion of student outcomes as an important component of the accreditation process. This research augments these current initiatives by identifying some of the structural/organizational characteristics related to student learning.

A second growing force for higher education is the emergence of performance indicators in state funding (Cabrera & La Nasa, 2000). This research demonstrates structural/organizational characteristics that address student performance, learning, and growth. Structural/organizational characteristics are measured by the size of the institution, revenues, expenditures, endowment, selectivity, complexity, and the presence of residential students. These measures are aligned with the current literature on organizational effectiveness (Hall, 1991; Lewis, 1995; Pascarella and Terenzini, 1991; Reiss, 1970; Volkwein, Valle, Blose, & Zhou, 2000). These characteristics can help state government and other funding sources identify potential performance indicators that will enhance student performance, learning, and growth. Although structural/organizational characteristics such as size, wealth, complexity, and selectivity may not initially appear to be indicators, many of the indicators currently included in state performance budgeting criteria (for example, SAT scores, array of academic programs and services, revenue enhancement strategies, and targeted populations) contribute to these four categories of characteristics (Burke, 1997).

Finally, Cohen and Brawer (1996) identified a major gap in research between four-year institutions and two-year institutions. Compared to studies of four-year institutions, there is a relative dearth of research on the two-year sector. The proposed research will address this gap and articulate some of the commonalities and differences between these institutional types.

This research addresses the following questions:

1. Controlling for other variables, what are the structural/organizational characteristics of institutions that contribute to positive student performance, learning, and growth?
2. What are the differences between two-year and four-year institutions that most contribute to positive student performance, learning, and growth for a population of fourth-semester students?

Conceptual Framework

The Pascarella (1985) model of student outcomes provides the conceptual framework for this study. The Pascarella (1985) General Causal Model specifies five elements influencing student learning and cognitive development. These elements are structural/organizational characteristics of institutions (size, mission, wealth, complexity, and selectivity), student background/pre-college traits (aptitude, personality, ethnicity, high school experiences), interactions with agents of socialization (faculty and peer interactions), institutional environment (classroom experiences, student services, tolerance, safety), and quality of student effort.
The Pascarella model assumes that all these components contribute directly or indirectly to learning and cognitive development. The study examines the structural/organizational characteristics of institutions while controlling for factors such as the institutional environment (as perceived by the student), interactions with agents of socialization, the quality of student effort, and pre-college traits. This allows the authors to examine the influence of specific structural/organizational characteristics associated with student learning and growth, as well as the differentiation or similarity between two-year and four-year institutions.

Method

Participants

This research utilized a 1997 multi-campus database drawn from 51 public institutions (23 four-year and 28 two-year). There are 7,658 students in the database who completed the assessment instrument at the end of their second year. The study is limited to second-year students, ensuring that students have spent an equal amount of time at their respective institutions.

Figure: Pascarella's (1985) General Causal Model. The model links structural/organizational characteristics of institutions (e.g., enrollment, faculty-student ratio, selectivity, percent residential), student background/pre-college traits (e.g., aptitude, achievement, personality, aspirations, ethnicity), interactions with agents of socialization (e.g., faculty, peers), the institutional environment, and quality of student effort to learning and cognitive development.

Materials

The database contains both institutional and student level data. The institutional level data include information on organizational complexity, financial resources, selectivity, sector, residential component, and student demographics. The student level data include information on pre-college characteristics, perceptions of the institutional environment, experiences of academic and social integration, financial aid, effort, and student learning and growth. Institutional measures of wealth, enrollment, and sector were obtained from the Integrated Postsecondary Education Data System (IPEDS). Institutional complexity measures were gathered from the Directory of Higher Education (1997 edition).

Procedure

The institutional data were gathered from multiple sources, all for the 1997-1998 academic year. A committee of cooperating researchers and administrators from participating institutions developed the survey instrument. The instrument is grounded in the Pascarella (1985) and Cabrera, Nora, and Castaneda (1993) models of student outcomes and persistence. The Cabrera, Nora, and Castaneda (1993) model of student persistence proposes a more complex array of factors leading to student persistence decisions than the Pascarella (1985) model. Included in the Cabrera, Nora, and Castaneda (1993) model are financial aid, pre-college academic performance, significant others' encouragement, financial attitudes, academic and intellectual development, grade point average, social integration, institutional commitment, goal commitment, and intent to persist. This model provided additional factors for the database that have been demonstrated to have significant relationships with student persistence. The survey for the database was printed and scored by the American College Testing program. The database was analyzed using the PC version of SPSS statistical software.

Measurement

There are a number of variables and constructs hypothesized to be related to student performance, learning, and growth contained in the database.
The present study examines the variables and constructs proposed by both the Pascarella and Cabrera models of student outcomes related to student performance, learning, and growth. Specifically, the study includes the following variables, also listed in Table 1. (Table 1, referenced in this paper, may be obtained by contacting the authors.)

Dependent Variables

Learning and Cognitive Development

For the purposes of this study, student performance, learning, and growth are taken from two perspectives: students and faculty. First, student perceptions of growth are obtained from students' self-assessment of their own intellectual growth (acquiring information, ideas, concepts, and analytical thinking) on a five-point growth scale (1 = none and 5 = extremely high). Second, faculty perceptions of student learning were measured by the cumulative grade point average reported by students.

Independent Variables

Structural/Organizational Characteristics of Institutions

Key indicators for structural/organizational characteristics used in previous literature have included size, wealth, complexity, mission, and selectivity (Volkwein, Valle, Parmley, Blose, & Zhou, 2000). Size is represented by the total undergraduate headcount enrollment at the institution. Mission is measured on a scale from 1 to 6, with 1 being Associate degree granting and 6 being Professional degree granting. Wealth includes measures of revenues and expenditures per annual full-time enrollment. The complexity measure reflects the number of organizational units headed by a Vice President or Dean (or equivalent) and the highest degree offered by the institution. Selectivity includes the percentage of applicants admitted. In addition to these factors, this study included the presence or absence of residential housing on campus.

Pre-college Factors

This study controls for student characteristics such as racial/ethnic group membership, disability, gender, previous employment, dependent children, socioeconomic background, age, SAT score, high school rank, and high school average.

Interactions with Agents of Socialization

This study also includes student-reported variables reflecting the extent of interactions, including the amount of faculty interaction (amount of direct contact with faculty, satisfaction with faculty and advisors) and the extent to which the students interacted with their peers (extent and value placed upon peer interactions).

Institutional Environment

Factors contributing to institutional climate include measures of classroom experiences (stimulation in class, faculty quality, classroom satisfaction), perceptions of openness and tolerance (satisfaction with the atmosphere of understanding, freedom from harassment, racial harmony, understanding of lesbian/gay/bisexual issues, and security/safety), perceptions of low prejudice (by peer students, faculty, and administrators), satisfaction with various student services, and satisfaction with various academic support services and facilities.

Quality of Student Effort

Student effort is measured by student perception of good study habits and giving a high priority to studying.

Data Analysis

First, a factor analysis was conducted to see if the items clustered consistently with student outcome theory. The resulting factors were examined. The resulting scale construction is reported in Table 1, and scale reliabilities for the two-year sector, the four-year sector, and the combined sample are reported in Table 2.
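The scale-building and block-entry analysis described here and in the Results section below can be illustrated with a short sketch. The Python code below is only an illustration of the general approach; the item names, scale groupings, column labels, and data file are hypothetical, and the authors' actual analysis was conducted in SPSS.

    # Illustrative sketch only: item names, scale definitions, columns, and file
    # are hypothetical stand-ins for the multi-campus survey database.
    import pandas as pd
    import statsmodels.api as sm

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a set of survey items (one column per item)."""
        items = items.dropna()
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    df = pd.read_csv("student_survey.csv")           # hypothetical file
    scales = {                                        # hypothetical item groupings
        "FACULTY INTERACTION": ["fac_contact", "fac_satisfaction", "advisor_satisfaction"],
        "PEER INTERACTION": ["peer_extent", "peer_value"],
        "CLASSROOM EXPERIENCE": ["class_stimulation", "faculty_quality", "class_satisfaction"],
    }

    # Reliabilities for the two-year sector, the four-year sector, and the combined sample
    for label, subset in [("2-year", df[df.sector == 2]),
                          ("4-year", df[df.sector == 4]),
                          ("combined", df)]:
        for scale, items in scales.items():
            print(label, scale, round(cronbach_alpha(subset[items]), 2))

    # Blocks entered in the order of the Pascarella model, as in the regressions below
    blocks = [
        ["total_sat", "hs_rank"],                              # pre-college
        ["mission", "wealth", "complexity", "pct_resident"],   # structural/organizational
        ["faculty_interaction", "peer_interaction"],           # agents of socialization
        ["classroom_experience", "student_effort"],            # environment and effort
    ]
    predictors = []
    for block in blocks:
        predictors += block
        model = sm.OLS(df["gpa"], sm.add_constant(df[predictors]), missing="drop").fit()
        print(f"Block through {block[-1]}: R-squared = {model.rsquared:.3f}")

Printing the R-squared after each block mirrors the cumulative R-squared values reported in the regression tables, so the incremental contribution of each set of variables can be read off directly.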
Table 2.

2 YEAR FACULTY INTERACTION PEER INTERACTION INVOLVEMENT LOW PREJUDICE OPEN TOLERANCE HEALTH SERVICE REGISTRATION AND BILLING CLASSROOM EXPERIENCE STUDENT EFFORT GROWTH 4 YEAR 2 AND 4 YEAR 0.81 0.74 0.79 0.85 0.76 0.92 0.76 0.87 0.87 0.74 0.88 0.71 0.83 0.86 0.75 0.91 0.73 0.79 0.79 0.68 0.74 0.89 0.80 0.88 0.88 0.79 0.87 0.89 0.79 0.88

The principal method of analysis is the use of OLS regression equations to predict the dependent variables, grade point average and student growth. Separate regression equations for the dependent variables are run for the two-year, four-year, and combined populations.

Results

Controlling for other variables, what are the structural/organizational characteristics of institutions that contribute to positive student performance, learning, and growth? Tables 3 and 4 display the regression beta weights for each of the three populations, with GPA as the dependent variable in Table 3 and student self-reported growth as the dependent variable in Table 4. In each case, the variables were entered in blocks consistent with the Pascarella model (pre-college variables first, structural/organizational variables second, interactions with agents of socialization third, institutional environment and effort fourth).

The results indicate that the structural/organizational characteristics of mission, complexity, and residential percentages do contribute to student performance, learning, and growth. Specifically, for both the four-year sector and the combined population, the higher the degree offered by the institution, the lower the GPA of its students. The more wealth an institution has, the higher the GPA for students at four-year institutions, and the more complex a four-year institution is, the higher the students' performance and learning. Finally, the more students live off campus in the combined sample, the better their grade point average. In terms of growth, the higher the degrees offered by the institution, the more growth the students reported experiencing.

What are the differences between two-year and four-year institutions that most contribute to positive student performance, learning, and growth for a population of fourth-semester students? Differences do exist in this sample between the two-year and four-year sectors in the structural/organizational characteristics contributing to positive performance and learning. While none of the structural/organizational characteristics were related to student performance and growth at two-year institutions, four-year institutions demonstrated significant relationships between their mission, wealth, and complexity and performance and learning. Specifically, students at four-year institutions that offered lower degrees experienced greater learning and performance. Additionally, four-year institutions with greater wealth had students with higher reported learning and performance. Finally, students at four-year institutions with greater complexity reported higher learning and performance than students at four-year institutions with less complexity.

Table 3. DEPENDENT VARIABLE: COLLEGE G.P.A.
PRE-COLLEGE VARIABLES STRUCTURAL/ORGANIZATIONAL 2 YEAR 4 YEAR 2 & 4 YEAR N=5082 N=2576 N=7658 Beta Beta Beta RACIAL-ETHNIC GROUP 0.079* TOTAL SAT 0.230*** 0.275*** 0.250*** HSRANK 0.317*** 0.258*** 0.288*** TOTAL R-SQUARED 0.264 0.171 0.183 MISSION '-0.345*** WEALTH 0.111** COMPLEX 0.244* COLLEGE RESIDENCE TOTAL R-SQUARED AGENTS OF SOCIALIZATION INSTITUTIONAL ENVIRONMENT 0.277 0.306 CLASSROOM EFFORT *p<.05 204 0.215 0.226 '-0.096** '-0.057* 0.255 0.257 0.123** 0.102** 0.237*** 0.220*** 0.333 0.300 0.294 **p<.01 ***p<.001 0.193*** TOTAL R-SQUARED 0.139* 0.100** PEER TOTAL R-SQUARED '-0.266*** Table 4. DEPENDENT VARIABLE: STUDENT GROWTH PRE-COLLEGE VARIABLES 2 YEAR 4 YEAR 2 & 4 YEAR N=5082 N=2576 N=7658 Beta Beta Beta GENDER 0.080*** AGE -0.045* TOTAL SAT -0.083* STUDENT WITH DISABILITY AGENTS OF SOCIALIZATION INSTITUTIONAL ENVIRONMENT -0.065* '-0.085** -0.046* 0.062 0.063 0.080 TOTAL R-SQUARED 0.067 0.092 0.099 PEER 0.192*** 0.254*** 0.237*** TOTAL R-SQUARED 0.509 0.461 0.490 INVOLVM 0.169*** 0.109*** 0.134*** REGBILL 0.080* CLASSROOM 0.440*** 0.399*** 0.512 0.462 **p<.01 ***p<.001 TOTAL R-SQUARED STRUCTURAL/ORGANIZATIONAL 0.071*** MISSION 0.140** 0.048* EFFORT 0.406*** 0.048* TOTAL R-SQUARED *p<.05 0.492 Summary, Conclusions, and Significance The results of this study demonstrate that structural/organizational differences do influence student’s performance, learning, and growth. While it is extremely importance to keep focusing on the academic preparation, interactions with agents of socialization, institutional environment, and student effort to influence student performance, learning and growth, accreditation agencies, state governments, and institutions themselves should pay attention to the issues of mission, complexity, residence component, and wealth. Equally important is the demonstration that two and four-year sector institutions are not the same when it comes to predicting student performance, learning, and growth, and hence should not governed, evaluated, or monitored according the same standards. The results of this study indicate that accreditation, funding, and governing bodies should 205 examine two-year and four-year institutions separately, and create separate criteria for the assessment of the two sectors. Specifically, for the two-year sector, student pre-college characteristics predict approximately one-half of the R2 variance for grade point average. This indicates that what the students bring to the two-year institutional environment has tremendous implications for their subsequent performance and evaluation. In contrast, the four-year sector had a greater variety of influences on grade point average. Pre-college characteristics, institutional type, student effort, and classroom environment all contributed substantially to grade point average. In reference to student growth, the profiles between the two sectors were more similar than the profiles between the two sectors for grade point average. Classroom environment appears to be much more important in predicting growth than any other variable. This finding supports recent research (Volkwein, Valle, Blose, & Zhou, 2000) that the classroom experience is a critical variable in student outcomes. This research can contribute to the current discourse regarding the transition of accrediting agencies to a more student learning centered perspective. 
The significant results indicate that some institutional factors can contribute to student performance, learning, and growth, potentially influencing the criteria used in accreditation processes. Second, the study contributes to the continuing issue of performance indicators in higher education. The key institutional factors associated with increased effectiveness of student performance, learning, and growth, could serve as performance indicators for use by state governments for funding initiatives. Because much research conducted on student outcomes fails to examine the two-year sector, or compare the two vs. the fouryear sectors, much of the rich information is overlooked. This information is critical when creating performance indicators. The difference between the regression outcomes for the two and four year sectors in this study indicates that when performance indicators are established, the different institutional sectors should be taken into consideration. Third, the study provides a critical comparison of student outcomes in the two and four year sectors. Little research has directly compared the effectiveness of two and four year institutions and the factors that comprise such effectiveness. The present study identifies those factors for each sector and compares them, demonstrating that differences between the two sectors do indeed exist. Limitations of the Study Generalizability of the results of the study may be limited due to single state, public institutions participating in the study. Additionally, the results are limited to the population of second year students included in the study for analysis. These second year students represent only those students who have successfully persisted at their respective institutions. Results from this study may not be generalizable to students who do not 206 persist through their second year. This persistence may also be related to institutional type (i.e. two vs. four-year institutions). Although using grade point average has become accepted as a measure of student learning, it may not be the best indicator possible (Pascarella, 1985). Hence, the results are limited to the belief that grade point average is an adequate proxy for student learning. Third, the database does not include items related to the degree of sophistication of institutional technology, a structural/organizational characteristic that may be related to student learning and growth. References Burke, J. C. (1997). Performance funding indicators: concerns, values, and models for two and four-year colleges and universities. Albany, New York: The Nelson A. Rockefeller Institute of Government. Cabrera, A. F. & La Nasa, S. M. (2000). On college teaching methods and their effects: ten lessons learned. Ill Journadas de Intercambio de Experiencias de Mejora en la Universidad Gabinete de Estudios y Evaluacion Universidad de Valladolid. Espana Valladolid, Junio 21-23, 2000. Cabrera, A. F., Nora, A., & Castaneda, M. B. (1993). The role of finances in the persistence process: A structural model. Research in Higher Education, 33(5), 571-593. Carnevale, D. (2000). Accrediting bodies consider new standards for distanceeducation programs. The Chronicle of Higher Education, xlvii(2), A58-A59. Hall, R. H. (1991). Organizations: Structure and Process. Englewood Cliffs: Prentice-Hall. Lewis, M. V. (1995). 
Student Outcomes at Private Accredited Career Schools and Colleges of Technology: An Analysis of the Effects of Selected School/College Characteristics on Student Outcomes for School Years 1990 Through 1993. Columbus, Ohio: Center on Education and Training for Employment. The Ohio State University. (Eric Document Reproduction Service No. ED 379 492) McMurtrie, B. (2000). Accreditors revamp policies to stress student learning. The Chronicle of Higher Education, A29-A31. Middle States Association of Colleges and Schools. (1994). Characteristics of Excellence in Higher Education: Standards for Accreditation. (On-line) Available: www.msache.org. 207 North Central Association of Colleges and Schools. (2000). Shaping the Commission’s Future: Mission Statement 2000. (On-line) Available: www.ncacihe.org. Pascarella, E. (1985). College environmental influences on learning and cognitive development: A critical review and synthesis. In J. Smart (Ed.), Higher Education: Handbook of Theory and Research, 1, New York: Agathon. Pascarella, E. & Terenzini, P. T. (1991). How College Affects Students. San Francisco: Jossey Bass. Reiss, W. (1970). Organizational Complexity: the Relationship between the size of the administrative component and school system size. (Technical Report No. 10). Eugene, Oregon: University of Oregon, Center for the Advanced Study of Educational Administration. Volkwein, J. F., Valle, S., Blose, G. & Zhou, Y. (2000). A Multi-Campus Study of Academic Performance and Cognitive Growth among Native Freshman, Two-year Transfers, and Four-year Transfers. Paper presented at the meeting of the Association for Institutional Research Forum, Cincinnati, OH. 208 USING QUALITATIVE ANALYTICAL METHODS FOR INSTITUTIONAL RESEARCH Carol Trosset Director of Institutional Research Grinnell College Introduction My early research training was in the natural sciences, primarily observational field biology and animal behavior. Then while I was a student at Carleton, I began studying cultural anthropology, and I became an ethnographer. There aren’t many ethnographers working in institutional research, which seems odd, since what ethnographers do is study communities. I do more anthropology as an institutional researcher than I did in my seven years as a faculty member, so today I’ll try to show you what that contributes to the sort of institutional research that I do. What ethnographers do, specifically, is spend years in an initially unfamiliar community gathering masses of apparently unrelated information, most of which is qualitative (such as how people behave at public gatherings, or what they say in casual conversations on the street). Over time, you try to piece all these things together to build an insightful picture of how that society works, how the people in it think, and what things they value. Obviously, qualitative analytical skills are central to this effort. Qualitative Research as a Process This sort of research is an inductive process. That is, you don’t set out to test a theory. Instead, you start with masses of information, and theories and answers emerge from it. The goal is usually to build what is sometimes called a “grounded theory,” which simply means that it emerged from the evidence rather than by being derived from a pre-existing theory. While anthropologists often use pre-existing theories to make sense of their surroundings, it’s very important to work inductively as well, to guard against becoming too enamored of a particular conceptual approach. 
Since social data are usually very complex, one good rule is start with complex data and initially look for patterns in the absence of a theory. This is important because you usually can’t be sure (especially in an unfamiliar culture) which variables are going to be related to the thing you think you’re interested in. Many anthropologists have stories about setting out to study one thing, only to be told by the local people that this meant they needed to understand something that seemed unrelated. One of my professors went to New Guinea to study emotion, and was forced by his hosts to learn all about birds, which did turn out to be central to the issue at hand. So one thing ethnographers learn is that you can’t know ahead of time what factors are related to each other. 209 Another key thing about ethnographic work is that you’re often trying to find out things that people can’t articulate consciously. This means that you can’t find out what you want to know by asking direct questions. Some of the technique lies in knowing what indirect questions to ask, or what situations to observe, but the rest of it is hidden in the analysis. This may all sound very subjective, but there are describable techniques, and there are good ways of testing the plausibility of the results. Here’s an example from my first research task at Grinnell. I was initially hired by Grinnell’s then-president as a consultant to study why Grinnell students felt they couldn’t talk about diversity issues. I and my student assistants did a lot of interviews. We compiled a list of issues the students thought it was hard to talk about, such as “whether race is an important difference between people.” For each issue, students were asked whether or not they wanted to have a balanced discussion about that subject. Then they were asked why or why not. At first, I thought this was a failed interview design, because so many people misunderstood the question and said no, they didn’t want to talk (“have a balanced discussion”) because a discussion of that issue wouldn’t be balanced. This meant that we couldn’t get a count of how many people thought they wanted to have a balanced discussion, because they hadn’t answered the question. But later I went back and assembled all the answers people had given to the “why or why not” part of the question. I think I was just being thorough because a few of the responses had looked kind of interesting. But when I assembled them all (well over 100 comments), I found I had overwhelming evidence answering a question I hadn’t thought to ask, and which I couldn’t have asked directly anyway: What did students think discussion was for? Here are a few representative comments from these interviews: • “I want to discuss the importance of sex differences, because I have strong opinions.” • “I might discuss discomfort with homosexuality depending on the company. 
If they were persuadable, I would want to convince them.” • “I want to discuss causes of sexual orientation because I have strong views on this issue.” • “I want to discuss religion because I have a unique perspective I like to express.” • “I want to discuss the place of religion in society because I have a strong opinion.” • “I am not likely to want to discuss the importance of sex differences, but occasionally someone needs to be argued with.” • “I want to discuss affirmative action because I want to educate people.” • “Ideally, you should talk in order to make the other person realize that what they said was wrong.” • “You should talk in order to reform others to your views.” Though each person talked about different issues and used different words to explain themselves, there was an amazingly consistent underlying theme. When students wanted to discuss something, it was because they held strong views and wanted to convince 210 others. When they didn’t want to discuss, it was because they didn’t know much about an issue or didn’t have an opinion. Clearly, discussion was seen as a form of advocacy. I went on to identify different dimensions of this assumption. One variant takes the form of “The answer is obvious.” • “I don’t want to discuss race because it’s not an important difference between people.” • “I am closed-minded on the importance of race—race shouldn’t distinguish between people.” • “I don’t want to discuss sexual orientation because it doesn’t really matter.” • “I don’t want to discuss causes of sexual orientation because this topic is irrelevant to the nature of homosexuality.” • “Biological sex has little relevance, there are no major differences, so I would like to hear other views (on their importance).” • “I want to discuss affirmative action because I want to educate people.” • “Affirmative action is a yes or no issue, which makes it difficult for discussion to be fair and balanced.” Another version goes “I don’t want to talk about things I’m unsure of.” • “I would want to discuss multicultural education and affirmative action if I were more knowledgeable.” • “I’m not sure what multiculturalism is; I don’t know much about it, so I don’t want to discuss it.” • “I don’t want to talk about multicultural education, because I don’t know what it means or what the point is, and therefore I feel uneducated.” • “I want to discuss politics as long as I know what I’m talking about.” • “I would like to discuss politics if I am knowledgeable about the topic.” • “I don’t want to discuss politics because I don’t have a stand on these issues.” • “I like discussing gender issues because I feel knowledgeable about them.” • “I don’t want to discuss affirmative action because I am not familiar with the subject.” • “I don’t want to discuss affirmative action because I know absolutely nothing about it.” • “I don’t want to talk about things I’m unsure of.” I also found five whole comments, out of about 200, that assumed a different view of discussion, as a form of exploration. • “I want to talk about multicultural education because I’m not sure I know enough about it.” • “I want to discuss multicultural education, as I would like more experience on what this would involve. 
I believe in a broad range of experience.” • “I want to discuss race, as it would open my mind to things I don’t experience myself.” • “I want to discuss multicultural education because I’m curious to see where I stand in relation to others.” • “I want to discuss multicultural education because it interests me.” This is a good example of the inductive nature of this type of analysis. I didn’t even know what question I was going to answer by assembling the data. I had never thought about what Grinnell students thought discussion was for. Once I saw what they thought, it jumped out at me because it was different from what I thought discussion was for (namely, exploration; presumably a more common view among intellectuals). Now, this is why ethnographers should come from another culture, so that they will notice things that are locally obvious. Obviously, at Grinnell I’m not from another culture, but I don’t share all the local assumptions, and that’s frequently helpful to me in noticing things others take for granted. How do I know that I’m right about the students’ view of discussion? There are three things that contribute to my certainty. First is the fact that I was totally surprised by my own findings. I’d never considered the issue and I was astonished that they could think such a thing, so I know, at least, that there was no bias internal to myself “trying” to find out what I did. Second was the reaction of the other faculty when I reported my findings. They all had a sort of “aha” experience. What I said resonated with their own local knowledge. They said things like “that’s what’s been going on in my classroom!” They hadn’t been able to articulate it for themselves, but once I did that for them, my conclusions seemed immediately obvious and explained much of their own experience. Though the absence of this reaction is not always proof that an ethnographer is wrong, when you get a strong “aha” reaction it’s always a good sign. And when I presented my findings at conferences I got similar reactions from professors at other colleges. Third, I did follow-up research using both interviews and surveys, and these studies consistently confirmed my initial theories. The kind of analysis I most enjoy doing is often most useful in the earlier stages of studying a complex issue. When and How to Gather Qualitative Data Since most institutional researchers never get the chance to spend as much time on one project as I spent on my study of student discussion, let’s look at when to use these methods on smaller-scale projects. Most of the time, I do this kind of thing when working with either interview data or survey comments. Personally, I vastly prefer interviews to surveys, because you can exercise so much more control over whether the person really answers the question you meant to ask. Interviews are especially useful under certain conditions, including: • When you don’t quite know what you want to know • When you’re investigating something complex and aren’t sure what questions to ask • When you’re trying to study people’s assumptions which they may not be able to articulate • When you want to do a survey and are trying to make sure you ask the most useful questions Now that Grinnell has gotten used to having an interviewer in the IR office, I get interview assignments on a fairly regular basis. They’re time-consuming, but can yield a great deal more information than a survey.
In one such project, I was asked to interview the most recent three years of tenure-track faculty hires, to find out how they had made the decision to accept Grinnell’s job offer. It had occurred to the dean that we always knew someone’s reason for turning down an offer, because they told us on the phone when they called to decline, but we never heard the reasons why people accepted us. I think that year several of our first choices had turned us down, so he sent me out to learn why people accept. Instead of just asking them that one question, I got each one to tell me the story of their job search, about their other interviews, and what they saw as the pros and cons of coming to Grinnell. From this I built up a profile of how many had turned down other offers, how many had only applied to Grinnell, how many had already taught elsewhere, and what were seen as the most common draws and drawbacks. The analysis was pretty simple, since all I had to do was count how many people mentioned each thing, but doing it through interviews built up a pretty detailed picture of how people went through the process and what issues they were still struggling with. Another study required me to interview all the senior humanities majors who had taken fewer than three science courses during their time at Grinnell. We have no distribution requirements, but most of our students would meet fairly basic ones if we had them. So I was sent to talk to the ones who didn’t and find out why not. One of our deans had a theory that the only ones who didn’t take science had a good reason, either a learning disability or that they took science courses elsewhere in the summers. I was able to show that this was not at all true. I documented all the misconceptions these students held about the nature of the sciences, and found some places where the advising system was not working as expected. I also was able to confirm the widespread faculty impression that some students do select Grinnell because they know we won’t make them ever take another math course. An aside about focus groups is in order, since they are another popular way of gathering qualitative data. In the right hands, they can be very effective. I personally don’t use them much, partly because they’re far more complicated than they look. They may seem like an efficient way to interview a bunch of people at once, and most of the analytical process is very similar, but there are enormous complications. This is because the members of a focus group are reacting to each other, and it takes great skill to separate this out from the rest of what you find. Here are some times when I think interviews are better than focus groups: • When you have a sensitive topic and people might be reluctant to speak in front of others • When you don’t want people to influence each other’s responses (make sure all responses are independent of each other) • When you’re not sure how to group people in a way that permits effective discussion • When you personally are better at paying close attention to one person at a time Surveys are, of course, the most common way to gather qualitative data. Most people know to leave “white space” on surveys to invite comments, but most people also have little experience in how to analyze the things people write in those spaces. Many times, the comments get typed up in a list and included as an appendix to a quantitative report. This is certainly better than nothing, but there’s much more that can be done.
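Whether the material comes from interviews or from survey white space, the first and simplest step is the tally described above: count how many respondents mentioned each coded theme. The short Python sketch below is purely illustrative and is not the author's actual workflow; the respondent labels and codes are hypothetical.

from collections import Counter

# Hypothetical coded interview notes: each respondent maps to the set of
# themes ("codes") that came up in his or her interview.
coded_interviews = {
    "respondent_01": {"teaching load", "liberal arts mission", "spouse employment"},
    "respondent_02": {"liberal arts mission", "salary"},
    "respondent_03": {"teaching load", "location", "liberal arts mission"},
}

# Count how many respondents mentioned each code and report it as a share.
mentions = Counter(code for codes in coded_interviews.values() for code in codes)
n = len(coded_interviews)
for code, count in mentions.most_common():
    print(f"{code}: {count} of {n} respondents ({count / n:.0%})")

The counting is trivial; the real work, as the rest of this paper argues, is reducing each transcript or comment to a defensible set of themes in the first place.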
Our student affairs office now frequently sends me comments to analyze when they do a big survey. Sometimes I’m able to tell them things they didn’t know to ask for. Here’s an example. Grinnell has a system of Residence Life Coordinators, young adults with master’s degrees who live in the dorms. The policy is that they do not enforce rules and are not required to report illegal behavior. This system was invented so that students would be willing to ask for their help when drug or alcohol problems arose. On a recent survey, students were asked whether they approved of the policy whereby RLCs did not enforce rules. I could have simply added up the yesses and nos and reported that 90% of the students approved of the current policy. Since that would have been very boring, and some of the comments looked intriguing, I started analyzing the reasons people gave for not wanting RLCs to enforce rules. I ended up writing a whole report on the student concept of self-governance. Technically, student self-governance at Grinnell means that the residents of each dormitory floor make and enforce their own rules. This process is supposed to build student responsibility. However, I had already been told by students in my own classes that anarchy, not democracy, is the dominant model for self-governance. “Self” is seen as referring to each individual, not to a community of students. This information was confirmed, and greatly clarified, by the survey comments. As you can see, anarchy, not confidentiality, is the dominant rationale. Excerpt from Residence Life Survey Report: 72% of the reasons given contain the (often implicit) argument that no one should enforce rules. • We are responsible adults. (30%) Translation = “we get to make our own decisions, and no one should tell us what to do.” • It would violate self-governance to have a non-student, or anyone at all, enforcing rules. (16%) • The absence of rule-enforcers is good practice for life after college. (7%) • There aren’t many problems, so policing isn’t needed. (7%) • It feels more comfortable not to have anyone around who can punish you. (6%) • The absence of rule-enforcers is an essential feature of Grinnell. (6%) I was hoping that this report would demonstrate to the student affairs staff that the dominant view of self-governance does little to build student responsibility. As you would expect, I failed to cause a revolution, but I did get a few people thinking, at least for a little while. Content Analysis as a Technique Okay, let’s get technical. What do I do with a batch of comments? One reason that qualitative analysis gets so much less respect than statistics is that most people don’t know that there are any formal techniques for doing it. Courses on how to do it are almost non-existent. I never took one. There is a formal technique known as “content analysis,” and there have been things written about it. People don’t do it exactly the same way, but the approach is consistent enough to describe it. Personally, I learned how to formulate the questions and got some practice trying to answer them in several of my undergraduate anthropology courses. Then I got lots of practice while doing field research and working on my dissertation. Finally, I refined my techniques once I had to do lots of these analyses rapidly as an institutional researcher. To illustrate how I do this, I’m going to use one of the most conceptually difficult content analyses I’ve ever performed.
Two years ago, the faculty asked my office to study a trial course evaluation form to see how valid and reliable it was. After deciding that they wanted to collect comments as well as ratings, they remembered that they had a qualitative analyst in the office and asked me to analyze the comments. Here’s what I did. The point of analyzing the comments was to use them to test the validity of the students’ ratings of the instructors. That is, were high and low ratings confirmed by positive and negative comments? I wasn’t sure how to go about doing this, so first I just read lots and lots of comments, focusing on the question about the instructor. I found that it would be difficult to code whole comments as either positive or negative, since many were mixed. I also saw that, although the question asked specifically about whether the instructor had contributed to the student’s learning, many of the comments focused on other things. So I decided that I would have to classify the comments based on what aspects of the instructor the comments were about. Now I read the forms again, this time making a list of each kind of comment I found. A small sample of this list includes the following: Inspiring, Dedicated to students, Kind, Available, Clear explanations, Welcomes comments, Broadened my understanding, Experienced, Organized, Encouraging. I kept reading until it had been quite a while since I’d found anything that wasn’t already on my list. Then I stopped reading and tried to simplify the list by combining similar comments into categories. If this were a workshop, we would now break up into groups and try this, and then argue about the merits of our various solutions. Since this is a paper instead, I’ll walk you through my own work. In the first step, I try to put synonyms together. For example, there’s no need to have both “brilliant” and “intelligent” on the list. Likewise, “dedicated to students,” “concerned with students,” and “respects students” can all be considered the same comment. In the second step, I take batches of synonyms and try to link them based on what aspect of the instructor they seem to be referring to. For example, • Kind / personable / understanding / approachable / warm, • Dedicated to students / concerned with students / respects students, • Encouraging / helpful / supportive, and • Patronizing / condescending / aggressive / threatening are all personal qualities referring to the emotional dimensions of interacting with students. Notice that positive and negative versions of an attribute belong in the same conceptual category, at least at this stage. Eventually, I got it down to a list of about ten items. At that point, I read the course evaluations a third time, and tried to “code” each professor’s student comments, so that I could look for differences between courses. Now, I told you this was a difficult one. It took me about three tries to invent a consistent way of scoring each course. In the meantime, I had to revise my list of categories a couple of times, because some of the original ones turned out not to be mutually exclusive. (You can guess I spent most of the summer on this project.) At last, I invented a reliable scoring system. In the interests of eventually finishing, I asked my colleague (who was doing the statistical analyses of the ratings) to draw me a stratified random sample of courses. I coded all those courses, and found that, even though the question focused specifically on student learning, most of the comments were about other things.
• 32% were about personal attributes (nice, energetic, available) • 30% were about whether the professor was helpful • 26% were about perceived competence (knowledgeable, liked how the class was run) • 12% were about student learning (made student think, improved student’s skills) Here you can also see the categories that eventually emerged as the things our students think about when they evaluate teaching. Most individuals don’t think about all nine things, but in a typical class most or all will be mentioned at least once. • Professor availability • Professor niceness or approachability • Professor energy or enthusiasm • Appearance of professor knowledge level • How well class sessions were run • Whether the student liked the chosen classroom format • Whether the professor helped the student understand the course material • Whether the course made the student think • Whether the student’s skills increased Now, the really neat part comes when you combine a qualitative analysis with a quantitative one, because then you can see what’s really going on. My original mandate was to find out if the comments and the ratings corresponded in any meaningful way. So I asked my colleague to do a cluster analysis of the numeric ratings for the courses in the sample. He found four clusters, some with different average scores. If you stopped there, you’d probably conclude that the professors with an average class rating of 4.5 are worse teachers than those in the groups with averages of 5.9 and 5.6. But look what happens when you combine the clusters with the content analysis. Having coded each course for which categories the students commented on, I was able to ask whether the pattern of comments coincided with the pattern of scores. To my surprise and delight, it did. If we were only dealing with the Typical Good Class (well run, helpful professor, good course materials, average score 5.6) and the Mixed Feelings/Ambivalent Student classes (every student says some things were good and others bad, average score 5.1), it might be reasonable to accept the ratings as a good measure of quality. However, the two outlier groups have more distinctive features, which are less convincingly linked to student learning. The high scores (average 5.9) go to the Charismatic Professors. These are the only individuals who get many personal comments, and they’re also the only ones who get credit for picking good readings. (Others get things like “most of my learning came from the readings, not from the professor.”) Students rave about the other students, and about the personal relevance of the course material. Finally, it turns out that the lowest scores (average 4.5) don’t go to the classes where everyone thought there were problems, but to classes with a bimodal distribution of comments. (Unfortunately, these did not correlate well with the standard deviation of the scores, because that is so small in every class. However, that underscores the usefulness of the qualitative analysis.) In these classes, some students raved and others were harshly critical of everything. Using my insider knowledge of professors and the curriculum, I could see that many of these courses had content or requirements that in some way violated many students’ expectations (like using computers in an anthropology class). I also knew that, in these two groups at least, the students’ estimates often did not coincide with the respect accorded the instructors by their peers.
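The combining step described above (clustering courses on their numeric ratings, then checking whether the coded comment categories line up with cluster membership) can be sketched in a few lines. The Python below is a toy illustration only, not the study's code: the course labels, rating summaries, and coded categories are invented, and k-means is used simply as one readily available clustering method, not as the procedure the colleague actually ran.

import pandas as pd
from sklearn.cluster import KMeans

# Invented per-course rating summaries (the quantitative side).
courses = pd.DataFrame({
    "course":      ["A", "B", "C", "D", "E", "F", "G", "H"],
    "mean_rating": [5.9, 5.6, 5.1, 4.5, 5.8, 5.5, 5.2, 4.6],
    "sd_rating":   [0.3, 0.4, 0.5, 0.4, 0.3, 0.4, 0.5, 0.5],
})
km = KMeans(n_clusters=4, n_init=10, random_state=0)
courses["cluster"] = km.fit_predict(courses[["mean_rating", "sd_rating"]])

# Invented content-analysis codes: one row per (course, category) mentioned.
codes = pd.DataFrame({
    "course":   ["A", "A", "B", "B", "C", "D", "E", "F", "G", "H"],
    "category": ["personal attributes", "course readings", "helpfulness",
                 "competence", "mixed praise and criticism", "bimodal comments",
                 "personal attributes", "helpfulness", "competence",
                 "bimodal comments"],
})

# Cross-tabulate comment categories against rating clusters to see whether
# the pattern of comments coincides with the pattern of scores.
merged = codes.merge(courses[["course", "cluster"]], on="course")
print(pd.crosstab(merged["category"], merged["cluster"]))

With real data one would cluster on the full set of rating items rather than two summary numbers, but the idea is the same: the qualitative coding and the quantitative clusters are persuasive only when they tell the same story.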
Although some professors found this information very disturbing, others have tended to ignore my findings and argue that, since the numbers are all right (no statistically significant gender bias, etc.), the forms are a valid measure of teaching quality. I continue to argue that validity is about whether the respondents really answer the question, and my data show that many of them don’t. It’s been an uphill battle, but I think I have at least gotten more people worried about what the ratings really mean, and therefore more cautious about how they want to use them. In conclusion, qualitative analysis and ethnographic methods generally, and content analysis techniques in particular, definitely have something to contribute to institutional research. They can be used to illuminate institutional culture, make sense of survey comments, and discover things no one has thought to investigate. ASSESSING OUTCOMES FOR SCHOOL OF BUSINESS MAJORS USING A PRIMARY TRAIT ANALYSIS David W. Wright Associate Professor West Liberty State College Marsha V. Krotseng Vice Provost West Liberty State College Purpose This paper describes the development and implementation of a student outcomes assessment program in a School of Business Administration (SBA) at a public baccalaureate institution. Specifically, we will discuss the development of a Primary Trait Analysis (PTA) instrument and its implementation within the SBA. The measures that were established and monitored through this process will provide valuable feedback for improving both school and institutional performance. Several unique elements of this assessment effort are that it was designed during the Fall 1999 semester and implemented in Spring 2000 -- a very aggressive timeframe; that it was accomplished with the active support and participation of all SBA faculty and administration; and that it was implemented at no additional cost to the School or the institution. Although the results are still being fully evaluated, the process itself has provided useful information, based on student and faculty feedback. Background The School of Business Administration is one of four Schools within this state-assisted baccalaureate-level institution of approximately 2,600 students. There are 550 majors in the School of Business Administration and seventeen full-time faculty members. Each major must complete 48 hours in the required business core and another 30 hours in a business specialization. The core includes instruction in management, marketing, accounting, economics, communications, legal environment, and computers. It is this business core that was evaluated using a Primary Trait Analysis instrument. Literature In Assessment Essentials (1999), Palomba and Banta describe Primary Trait Analysis (PTA) as one of many assessment techniques that can be useful for classroom as well as program assessment. The PTA identifies key factors or traits that are used in evaluating an assignment or project, and a standard three- to five-point scoring scale is developed for each trait. Each score “is accompanied by an explicit statement that describes performance at that level” (p. 164). The higher the score, the more clear, complete, and accurate is the student’s performance on that particular trait. Specific examples of some of the Primary Traits that emerged through this process and their descriptions are provided below. Methodology The College has a standing assessment committee that meets on a regular basis.
The committee comprises faculty representatives from each of the four Schools as well as the Provost, the Vice-Provost/Director of Institutional Research, a Dean, a Department Chair, the Assistant Dean of Student Affairs and the Assistant to the President. The College administration and the committee have been strong advocates for the assessment process as evidenced by their support for sending committee members to national conferences, holding on-campus seminars, and encouraging the use of external assessment consultants as appropriate. The SBA has established its own assessment committee that works in conjunction with this College-wide committee. One-third of the SBA faculty serve on the School’s assessment committee. The SBA representative to the College assessment committee serves as an ex-officio member to the SBA assessment committee. This committee meets on a regular basis and reports to the faculty and administration of the School; all recommendations from the committee require approval by the School’s entire faculty and administration. The first step in the School’s assessment process was to determine the educational outcomes expected by the School of Business Administration. The following outcomes were outlined based on institutional mission: 1. Students will develop critical thinking, decision making and problem solving skills in the application of appropriate business principles and practices. 2. Students will be proficient in computer applications. 3. Students will demonstrate verbal and written communication skills. 4. Students will be aware of the need for developing life long learning skills that will prepare them for entry into the business world and/or graduate educational opportunities. 5. Students will meet entry level requirements for employment in business. Next, faculty identified the method or methods that would be used to assess these outcomes. Based on information that SBA faculty learned during an on-campus workshop in Fall 1999, the School selected Primary Trait Analysis as one mechanism to measure the desired outcomes. Faculty were very receptive to the PTA process outlined by the consultant, and immediate steps were taken to apply this approach. 220 Historically, all business majors are required to take a capstone course, “Administrative Policies.” This course offers an opportunity for all students to exhibit the knowledge that they have acquired during their matriculation, specifically emphasizing knowledge related to the business core. This course provided an appropriate and logical venue in which to measure the educational outcomes of SBA majors. Data Sources The “Administrative Policies” course requires students to complete a comprehensive case analysis within a group/team setting. A formal presentation is made by the team to other students and faculty from the School. In order to identify the traits that would be assessed, all faculty attended the students’ presentations during Fall 1999. After observing these presentations, individual faculty members developed lists of potential primary traits. Early in the Spring Semester, 2000, the SBA assessment committee considered this information and compiled a working document of primary traits that could be used to assess student outcomes. 
After careful discussion, the committee agreed that the following six primary traits reflect outcomes expected of all business majors based on material in the business core: Critical Thinking, Accounting and Finance Knowledge, Marketing Knowledge, Use of Visual Aids, Oral Presentation, and Written Communication. These six traits were unanimously approved by the faculty of the SBA. In addition, they were reviewed and approved by the School’s external Advisory Council comprising representatives of local and regional businesses who provide feedback to the School. Statements were then developed to specify the exact outcomes for each trait that would correspond with each of the five levels on the evaluation scale. This represented one of the most time-consuming elements of the process since a number of meetings were required before the faculty were comfortable that the statements enabled them to satisfactorily distinguish various levels of performance. For example, the following statement reflects the outcomes for the highest score (5) in Critical Thinking: Students exhibited an advanced understanding of Business Principles by interpreting information, using appropriate models and techniques (financial ratios, strategic management matrices, economic concepts, etc.) and were able to logically draw conclusions and make appropriate strategic recommendations. In addition, students were able to defend their recommendations. Results During April 2000, all SBA faculty visited the classes and evaluated the students’ team presentations using specific statements such as that shown above. Two to three faculty members attended each presentation on a rotating basis. This pilot test provided an excellent trial of the traits and statements. Although the results will not be fully analyzed for another month, faculty and students have been impacted by the process. Clearly, a greater awareness exists of the need for student assessment and the importance of faculty involvement. All faculty who participated learned something as a result of the process; they have commented about what they witnessed during the presentations and on the level of expectation that the students met. Based on general levels of performance in written and oral communication, these traits have already been identified as possibilities for improvement. However, no drastic changes will be undertaken for several semesters until real trends become clear. Overall, there were two major concerns to be addressed in Fall 2000. First, “Did the faculty fully understand the process of the primary trait analysis and how to use the traits in evaluating student presentations?” In addition, “Did the students completely understand the bases on which they were being evaluated, and were they being adequately prepared?” Both the SBA Assessment Committee and the process have undergone changes as a result of the initial trial. The two department chairs are now responsible for leadership of the Assessment Committee along with a third faculty member co-chair. The Committee tabulated the results of the primary trait analysis and reached some preliminary conclusions. The Committee also surveyed faculty regarding their reactions to the process and asked them to suggest possible improvements. The specific questions included: 1. Are we using the relevant primary traits? If not, what would you suggest? 2. Is the instrument easy to use? If not, what would make it easier for you? 3. What refinements to these traits would you suggest? 4.
What can be done to help faculty facilitate the process? A majority of the faculty responded with valuable suggestions and comments. After reviewing their responses, the Committee made minor revisions to the trait scales and held a workshop on the course methodology related to the students’ case presentations. The workshop clarified the role of faculty in evaluating the presentations and ensured that all are fully aware of the evaluation criteria and levels of performance as defined. It is critical that faculty focus their evaluation on material that all students should have acquired through the business core. The course instructor also has revised his course requirements and techniques. For example, he has spent additional time explaining the goals of assessment and the process to students. They now understand the significance of integrating and relating the components of their presentations. Because of the importance placed on visual aids and graphics, students are now required to use PowerPoint (learned in the business studies core) as the basis of their presentation. Prior to preparing their presentations, all students will receive copies of the primary traits and evaluative statements, and they are required to maintain a log that lists team meeting dates and activities performed by each member of their group. 222 Conclusions and Implications This case study suggests that a sound assessment technique can be identified and implemented within a short period of time (one semester) given willingness, enthusiasm, and commitment by the School committee. Active participation by all faculty helped to achieve the buy-in required for this rapid implementation and represents a remarkable collaborative effort. In addition, this PTA was initiated using existing courses and faculty; no curricular changes were required, and no additional funds were necessary. The PTA promises to provide valuable information that will enable the School to improve its programs and enhance the overall performance of business majors. It is hoped that, through its early success, this process will become a model for other departments and schools at the College. 223 WEST LIBERTY STATE COLLEGE SCHOOL OF BUSINESS ADMINISTRATION ASSESSMENT 2000 EDUCATIONAL OUTCOMES 1. Students will develop critical thinking, decision making and problem solving skills in the application of appropriate business principles and practices. 2. Students will be proficient in computer applications. 3. Students will demonstrate verbal and written communication skills. 4. Students will meet entry level requirements for employment in business. PRIMARY TRAITS Critical Thinking Accounting and Finance Marketing Visual Aids Oral Presentation Skills Written Communication Skills 224 WORKSHEET – PRIMARY TRAIT SCALES CLASS: MGT 498 – ADMINISTRATIVE POLICIES GROUP PRESENTATION/COMPANY NAME: ______________________________ EVALUATOR: ______________________________ PRIMARY TRAITS Rating Scale 5 4 3 2 1 A. Critical Thinking _ _ _ _ _ B. Accounting/Finance _ _ _ _ _ C. Marketing _ _ _ _ _ D. Visual Aids _ _ _ _ _ E. Oral Presentation _ _ _ _ _ F. Written Communication _ _ _ _ _ 225 SCHOOL OF BUSINESS ADMINISTRATION ASSESSMENT CRITICAL THINKING 5. Students exhibited an advanced understanding of Business principles by interpreting information, using appropriate models and techniques (Financial Ratios, Strategic Management Matrices, Economic Concepts, etc.) and were able to logically draw conclusions and make appropriate Strategic recommendations. 
In addition, students were able to defend their recommendations. 4. Students exhibited an advanced understanding of Business principles by interpreting information, using appropriate models and techniques (Financial Ratios, Strategic Management Matrices, Economic Concepts, etc.) and were able to logically draw conclusions and make appropriate Strategic recommendations. Students were unable to defend their recommendations. 3. Students exhibited an understanding of Business principles by interpreting information, using appropriate models and techniques (Financial Ratios, Strategic Management Matrices, Economic Concepts, etc.). Students were able to draw conclusions (not necessarily logical) and make Strategic recommendations. Students were unable to defend their recommendations. 2. Students exhibited some understanding of Business principles but failed to properly interpret information or apply business models or techniques (Financial Ratios, Strategic Management Matrices, Economic Concepts, etc.). Students failed to draw conclusions or make Strategic recommendations. 1. Students exhibited no understanding of Business principles. Students did not interpret information or apply business models and techniques (Financial Ratios, Strategic Management Matrices, Economic Concepts, etc.) Students failed to draw conclusions or make Strategic recommendations. 10/23/00 226 SCHOOL OF BUSINESS ADMINISTRATION ASSESSMENT ACCOUNTING AND FINANCE 5. Students exhibited an advanced understanding of Accounting and Financial concepts by applying and interpreting appropriate techniques, models and data (Income Statements, Balance Sheets, Financial Ratios, etc.) and were able to logically draw conclusions and make appropriate Financial recommendations. In addition, students were able to defend their recommendations. 4. Students exhibited an advanced understanding of Accounting and Financial concepts by applying and interpreting appropriate techniques, models and data (Income Statements, Balance Sheets, Financial Ratios, etc.), and were able to logically draw conclusions and make appropriate Financial recommendations. Students were unable to defend their recommendations. 3. Students exhibited an understanding of Accounting and Financial concepts by applying and interpreting appropriate techniques, models and data (Income Statements, Balance Sheets, Financial Ratios, etc.). Students were able to draw conclusions (not necessarily logical) and make Financial recommendations. Students were unable to defend their recommendations. 2. Students exhibited some understanding of Accounting and Financial concepts but failed to properly interpret the appropriate techniques, models and data (Income Statements, Balance Sheets, Financial Ratios, etc.). Students failed to draw conclusions or make Financial recommendations. 1. Students exhibited no understanding of Accounting and Financial concepts. Students were unable to apply or interpret the appropriate techniques, models and data (Income Statements, Balance Sheets, Financial Ratios, etc.). Students failed to draw conclusions or make Financial recommendations. 10/23/00 227 SCHOOL OF BUSINESS ADMINISTRATION ASSESSMENT MARKETING 5. Students exhibited an advanced understanding of the principles of Marketing by interpreting information about target market selection and the development of product, distribution, price, and promotion and were able to logically draw conclusions and make appropriate Marketing recommendations. 
In addition, students were able to defend their recommendations. 4. Students exhibited an advanced understanding of the principles of Marketing by interpreting information about target market selection and the development of product, distribution, price, and promotion and were able to logically draw conclusions and make appropriate Marketing recommendations. Students were unable to defend their recommendations. 3. Students exhibited an understanding of the principles of Marketing by interpreting information about target market selection and the development of product, distribution, price, and promotion. Students were able to draw conclusions (not necessarily logical) and make Marketing recommendations. Students were unable to defend their recommendations. 2. Students exhibited some understanding of principles of Marketing but failed to properly interpret information about target market selection and the development of product, distribution, price, and promotion. Students failed to draw conclusions or make Marketing recommendations. 1. Students exhibited no understanding of the principles of Marketing. Students did not interpret information about target market selection and the development of product, distribution, price, and promotion. Students failed to draw conclusions or make Marketing recommendations. 10/23/00 228 SCHOOL OF BUSINESS ADMINISTRATION ASSESSMENT VISUAL AIDS 5. Students exhibited an advanced knowledge of the principles of Visual Presentations Techniques and utilized up-to-date computer generated visuals. The visual aids enhanced the viewer’s understanding of the material being presented. The presentation was well rehearsed; the visuals were an integral part of the presentation. 4. Students exhibited an advanced knowledge of the principles of Visual Presentation Techniques but did not utilize up-to-date computer generated visuals. The visual aids enhanced the viewer’s understanding of the material being presented. The presentation was well rehearsed; the visuals were an integral part of the presentation. 3. Students exhibited some knowledge of the principles of Visual Presentation Techniques and utilized up-to-date computer generated visuals. The visual aids did little to enhance the viewer’s understanding of the material being presented. The presentation was not well rehearsed. 2. Students exhibited limited knowledge of the principles of Visual Presentation Techniques and did not utilize up-to-date computer generated visuals. The visual aids did little to enhance the viewer’s understanding of the material being presented. The presentation was not well rehearsed. 1. Students exhibited no knowledge of the principles of Visual Presentation Techniques. The visual aids did nothing to enhance the viewer’s understanding of the material being presented. 10/23/00 229 SCHOOL OF BUSINESS ADMINISTRATION ASSESSMENT ORAL PRESENTATION SKILLS 5. Students exhibited an advanced understanding Oral Presentation Skills by utilizing appropriate delivery methods such as speaking from notes, using simple language, providing frequent summaries of key points, using appropriate voice quality, maintaining effective audience eye contact, providing a strong and effective opening and closing, and effectively handling the question-and-answer session. 4. 
Students exhibited an advanced understanding of Oral Presentation Skills by utilizing appropriate delivery methods such as speaking from notes, using simple language, providing frequent summaries of key points, using appropriate voice quality, maintaining effective audience eye contact, and providing a strong and effective opening and closing. Students were unable to effectively handle the question-and-answer session. 3. Students exhibited an understanding of Oral Presentation Skills by utilizing appropriate delivery methods such as speaking from notes, using simple language, providing frequent summaries of key points, using appropriate voice quality, maintaining effective audience eye contact, but did not provide a strong and effective opening and closing. Students were also unable to effectively handle the question-and-answer session. 2. Students exhibited some understanding of Oral Presentation Skills but poorly utilized appropriate delivery methods such as using simple language, frequent summaries of key points, and appropriate voice quality. Students failed to maintain effective audience eye contact and did not provide a strong and effective opening and closing. Students were also unable to effectively handle the question-and answer session. 1. Students exhibited no understanding of Oral Presentation Skills. Students failed to utilize appropriate delivery methods such as using simple language, frequent summaries of key points, and appropriate voice quality. Students also failed to maintain effective audience eye contact and did not provide a strong and effective opening and closing. Students were also unable to effectively handle the question-and-answer session. 10/23/00 230 SCHOOL OF BUSINESS ADMINISTRATION ASSESSMENT WRITTEN COMMUNICATION SKILLS 5. Students exhibited an advanced understanding of Written Communications Skills by utilizing appropriate writing techniques for reports such as sequencing information in a logical order, with a clearly defined purpose, an appropriate introduction which explains what and why, a body that explains how, where or how much, and developing conclusions that support the body of the report. Students utilized appropriate report format that is easy to read, including appropriate graphics and headings which lead the reader through the information in a consistent manner. Students also utilized an appropriate tone, convincing and precise language, and simple sentences utilizing correct spelling and grammar. 4. Students exhibited an advanced understanding of Written Communications Skills by utilizing appropriate writing techniques for reports such as sequencing information in a logical order, with a clearly defined purpose, an appropriate introduction which explains what and why, a body that explains how, where or how much, and developing conclusions that support the body of the report. Students utilized appropriate report format that is easy to read, including appropriate graphics and headings which lead the reader through the information in a consistent manner. Students also utilized an appropriate tone, convincing and precise language, and simple sentences utilizing correct spelling and grammar. Students were unable to effectively present the data in a consistent manner. 3. 
Students exhibited an understanding of Written Communications Skills by utilizing appropriate writing techniques for reports such as sequencing information in a logical order, with a clearly defined purpose, an appropriate introduction which explains what and why, a body that explains how, where or how much, and developing conclusions that support the body of the report. Students utilized appropriate report format that is easy to read, including appropriate graphics and headings which lead the reader through information in a consistent manner. Students also utilized an appropriate tone, convincing and precise language, and simple sentences utilizing correct spelling and grammar. Students were unable to effectively present the data in a consistent manner. 2. Students exhibited some understanding of Written Communications Skills but failed to utilize appropriate writing techniques for reports. Students failed to use an appropriate report format that is easy to read, and did not utilize headings which would lead the reader through the information in a consistent manner. 1. Students exhibited no understanding of Written Communications Skills. Students did not utilize appropriate writing techniques for reports. Students failed to utilize appropriate tone, convincing and precise language, and simple sentences utilizing correct spelling and grammar. 10/23/00 231 232 THE IMPACT OF REMEDIAL ENGLISH COURSES ON STUDENT COLLEGELEVEL COURSEWORK PERFORMANCE AND PERSISTENCE Meihua Zhai Director of Institutional Research Office of Planning & Analysis West Chester University of PA Jennie Skerl Associate Dean, College of Arts & Sciences West Chester University of PA Introduction This study of remedial English course at West Chester University was undertaken at the request of the Developmental Education Task Force, which Dr. Skerl chaired and which had representatives from the English Department, the Mathematics Department, and developmental education support services. One of the charges was for the Task Force to review the structure and effectiveness of remedial English and Mathematics courses and to propose to the Provost alternative structures if warranted by the review. West Chester University’s policy indicates that, “Placement in the appropriate composition course is determined by the score on the SAT and/or by performance on a placement test administered by the Department of English.” (p. 33, West Chester Undergraduate Catalog, 1999-2000). SAT Verbal (SAT-V) scores and an optional placement writing challenge exam are used to determine whether students must first be placed in a zero-level remedial composition course before being permitted to enroll in 100-level English courses, which are the college-level required courses. The cutoff score for remedial English placement was 450 SAT-V before the recentering, and 500 after it. Students must earn a grade of C- or better in order to pass the zero-level remedial courses before they are permitted to enroll in the 100-level courses. West Chester University requires all students to take two college-level composition courses as part of their general education requirements. Although a very large percentage of entering freshmen at WCU are placed in these courses (about one-third in English and fourteen percent in Mathematics remedial programs,) there had been no comprehensive evaluation of the effectiveness of these courses since their inception over 20 years ago. Therefore, the Task Force asked Dr. 
Zhai from the Office of Planning & Analysis to study the impact of remedial programs. Results and analyses about remedial Mathematics were presented at the 26th NEAIR conference. Since then, we have updated our initial study of the remedial English course; the results and analyses are presented here. As pointed out by Weissman, Bulakowski and Jumisko (1997): “The purpose of remedial courses is to enable students to gain the skills necessary to complete college-level courses and academic programs successfully.” Based on these guidelines, this study tried to examine the following issues: (1) To what extent are the remedial English courses effective in preparing students for their college-level required English courses? (2) To what extent do the remedial English courses contribute to students’ academic success as shown by their retention and graduation rates? Methodology Data Student course grades for the remedial (ENG 020) and two other required college-level English courses (ENG 120 & ENG 121), their SAT-V scores, admission type, enrollment status and graduation records were used in this study. Data were taken from the University’s historical snapshots and the Student Flow Models maintained by the Office of Planning & Analysis. This study covers the period from Fall 1992 to Spring 2000. Selection of the Comparison Group (Control Group) One of the major challenges facing the evaluation of remedial course impact in this four-year public institution is the lack of student comparison groups due to the remedial course placement policy adopted by the university. For this study it is assumed that, in order for a remedial program to be judged effective, it ought to help some students succeed who otherwise would most likely fail their college-level coursework. It was also assumed that, if the English remedial program can help some under-prepared students to succeed, it would fulfill its function. In order to ensure reasonably informative comparisons, the control or comparison group used for this study comprised those students who scored no more than 50 points higher than the SAT-V cutoff score for placement into the remedial program. The cutoff score for remedial English was 450 before the recentering of the SAT in fall 1996 and 500 after the recentering. As a result, the placement score for the Control Group was SAT-V above the cutoff score, but below or equal to 500 (550 after the recentering). Due to WCU’s policy, an entering student with SAT-V below 450/500 may be placed out of the remedial program if that student takes the English placement test and successfully passes it. A student may also be placed out of ENG 020 if that student has Advanced Placement credit or transferred credits from comparable English composition courses. In the forthcoming analysis, this group of students will be separated from the remedial and the comparison group.
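The group-assignment rule just described can be expressed compactly. The Python sketch below only illustrates that rule using the post-recentering cutoffs; it is not code from the study, the function and field names are hypothetical, and the order in which the checks are applied is a simplifying assumption.

# Hypothetical helper illustrating the comparison-group assignment; cutoffs reflect
# the post-recentering values (500 for placement, Control Group within 50 points above).
def comparison_group(sat_v, took_eng_020, cutoff=500, band=50):
    if took_eng_020:
        return "remedial"        # took at least one remedial English course
    if sat_v is None:
        return "no-SAT-V"        # transfer and non-traditional students without scores
    if sat_v < cutoff:
        return "placed-out"      # below the cutoff but exempted via placement test, AP, or transfer credit
    if sat_v <= cutoff + band:
        return "control"         # no more than 50 points above the cutoff
    return "college-ready"

# Example: a non-remediated student with SAT-V of 530 falls in the Control Group.
print(comparison_group(530, took_eng_020=False))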
234 Definition of Terms Student Groups: • • • • • remedial group - students who took at least one remedial English course during their matriculation in the University placed-out group - students with SAT-V below the cut-off score (450/500) who were placed out of the remedial program by taking a placement test given by the English Department Control Group - students whose SAT-V were high enough to place them out of the remedial program but lower than 500 (550 after the recentering) college-ready - students whose SAT-V scores were higher than 500/550 no-SAT-V - students with no SAT-V (transfer and non-traditional students) Admission Status: West Chester University admits students in four categories: regular admission and three categories of special admissions for those students who do not meet the criteria for regular admission: Academic Development Program Act 101, Academic Development Program non-Act 101, and Special Admit Motivational. The minimum qualifications for each category are as follows: • • • • Regular Admit - Academic program continued into senior year; combined SAT of 1000; High School Rank 50%; and Honors or AP classes a plus Academic Development Program Act 101 (ADP Act 101) - Verbal SAT 380; Math SAT 340; High School Rank 40%; and GPA 2.0 Academic Development Program Non-Act 101 (ADP Non-Act 101) - Similar as ADP Act-101, but without special financial assistance Special Admit Motivational (Special Admit) - Verbal SAT 480; Math SAT 450; High School Rank 60%; and GPA 2.7 Outcome Measures Three major outcome measures were employed to assess the impact of the remedial program. They are: (1) student performance in college-level English composition courses; (2) second-year retention rates and (3) six-year graduation and retention rates. Outcome measures were collected and compared between remedial students and students in the Control Group. It is NOT the intention of this study to compare developmental students with other college-ready students. Information concerning other students was included in this study for reference only. 235 Statistics Chi-square statistics were used to compare student course passing rates between remedial students and the Control Group. A grade of C- or better was considered a passing grade. One-way ANOVA was used to detect course performance differences on college-level English work. Due to the large sample size (4,388 records for ENG 020, 11,247 for ENG 120 and 14,305 for ENG 121), all statistical analyses yielded significant statistical results even when the magnitude of the difference was of little practical concern (for example a GPA of 2.69 vs. 2.83). As a result, statistical results were not reported. Instead, emphases were placed on the practical application of the findings when pertinent. Detailed statistical results are available upon request. Results and Analyses Course-Takers From fall 1992 to spring 2000, there were 4,060 students who took ENG 020 and 328 of them had to repeat the course at least once. The majority of ENG 020 course takers were first-time, full-time degree-seeking students. Taking the Fall 1999’s ENG 020 class for example: There were 564 students enrolled in the course. About 98% of them were first-time, degree seeking students. Of the 564 students, 59% were Regular Admits, 5%, ADP-Act 101, 7%, ADP-Non Act, and 27%, Special Admits. Table 1 tabulates the class profile for Fall 1999. Table 1. 
Summaries of ENG 020 Class Profile for Fall 1999 ENG 020 Fall 1999 Class Admission Status N ADP ADP - Non Act 101 Regular Admit Special Admission Regular Admit (Transfer) Admission Info Missing 29 43 334 150 3 5 564 995 Freshman Cohort % within the Class 5.14 7.62 59.22 26.60 0.53 0.89 SAT-V 408 417 469 448 # by Adm Type 48 77 1,374 201 % taking 020 60.42 55.84 24.31 74.63 1,700 Remedial Student Course-Taking Patterns Student course-taking pattern tracking showed that the majority of the students took ENG 020 in fall. If he/she passed the course by earning a grade of C- or better, he/she would proceed to take ENG 120 in spring and ENG 121 the following fall. If a student failed to pass ENG 020, he/she would usually repeat it in spring and then moved on to take ENG 120 the following fall, if he/she passed ENG 020. Table 2 provides a brief summary of the course passing status in the past 8 years. The total number in Table 2 is not unduplicated headcount. If a student took ENG 020 twice, once with a grade below 236 C- and once with a grade C- or better, that student will be counted once in the Pass and once in the Fail to Pass. As shown in Table 2, the success rate for ENG 020 was about 87%. Table 2. Summaries of ENG 020 Student Course Grade Distribution Frequency Valid Pass Percent Valid Percent Cumulative Percent 3818 87.0 87.0 87.0 Fail to Pass 450 10.3 10.3 97.3 Withdraw 120 2.7 2.7 100.0 4388 100.0 100.0 4388 100.0 Total Total After passing ENG 020, about 80% (3235/4060) of the remediated students proceed to take ENG 120. Remediated Student Course Performance in ENG 120 In order to see how remedial English helped preparing the students for their collegelevel course work, we first took a look at student course performance in ENG 120. Table 3 presents student course completion rates by the five student groups. Table 3. Summaries of ENG 120 Student Grade Distribution by Student Comparison Groups ENG 120 Course Passing Grade Student Comparison Groups Remediated Students Pass (C- or Better) Fail to Pass (Below C-) 3235 204 128 3567 90.7% 5.7% 3.6% 100.0% 1271 92 37 1400 90.8% 6.6% 2.6% 100.0% 2756 176 94 3026 91.1% 5.8% 3.1% 100.0% 1743 124 65 1932 90.2% 6.4% 3.4% 100.0% 1028 144 150 1322 % within Student Comparison Groups 77.8% 10.9% 11.3% 100.0% Count 10033 740 474 11247 % within Student Comparison Groups 89.2% 6.6% 4.2% 100.0% Count % within Student Comparison Groups PlacedOut - No Remedial, SATV below 450/500 (Placed-out) Control - No Remedial, SATV >=450/500 and <500/550 College-Ready, SATV >500/550 Count % within Student Comparison Groups Count % within Student Comparison Groups Count % within Student Comparison Groups No SATV Total Count 237 Withdraw Total According to Table 3, 90.7% of remediated students who took ENG 120 successfully passed the course, compared with 90.8% in the placed-out group, 91.1% in the Control Group, 90.2% in the college-ready group, and 77.8% in the no-SAT-V group. A study of the means of student course grades for the various groups in Table 4 shows that not only the passing rates between the remediated and the control groups were very comparable, the means were also very close. The mean course grade was 2.70 for the remediated students, 2.64 for the placed-out group, 2.74 for the Control Group, 2.863 for the college-ready group, and 2.73 for the no-SAT-V group. Table 4. Comparisons of Student Course Performance in Eng 120 ENG 120 Course Grade Student Comparison Groups Mean N Std. 
Deviation Remediated Students 2.6969 3439 .7968 PlacedOut - No Remedial, SATV below 450/500 (Placed-out) 2.6390 1363 .8390 Control - No Remedial, SATV >=450/500 and <500/550 2.7474 2932 .8427 College-Ready, SATV >500/550 2.8647 1867 .8945 No SATV 2.7311 1172 1.1466 Total 2.7361 10773 .8780 Remediated Student Course Performance in ENG 121 An examination of remediated students’ performance in ENG 121 revealed similar results as found in ENG 120. Tables 5 & 6 exhibits how remediated students performed in ENG 121 compared with the Control Group. According to Table 5, the passing rates for the remediated and the Control Group were very close: 82.3% for the former and 84.2 for the latter. In general about 81.3% of students who took ENG 121 pass the course. Results in Table 6 reveal that even though remediated students tend to have a similar passing rate as their non-remediated peers, their individual grades tend to be lower than those earned by their peers. For example, the mean grade for the remediated group was 2.54, as shown in Table 6, while the mean grades for the control and college-ready groups were 2.66 and 2.81 respectively. 238 Table 5. Summaries of ENG 121 Student Course Grade Distribution by Student Comparison Groups ENG 121 Course Passing Grade Student Comparison Groups Remediated Students Count % within Student Comparison Groups PlacedOut - No Remedial, SATV below 450/500 (Placed-out) Control - No Remedial, SATV >=450/500 and <500/550 College-Ready, SATV >500/550 Count % within Student Comparison Groups Count % within Student Comparison Groups Count % within Student Comparison Groups No SATV Total Count Pass (C- or Better) Fail to Pass (Below C-) 2405 324 195 2924 82.3% 11.1% 6.7% 100.0% 1406 238 92 1736 81.0% 13.7% 5.3% 100.0% 1846 228 118 2192 84.2% 10.4% 5.4% 100.0% 1751 222 175 2148 81.5% 10.3% 8.1% 100.0% Withdraw Total 4220 543 542 5305 % within Student Comparison Groups 79.5% 10.2% 10.2% 100.0% Count 11628 1555 1122 14305 % within Student Comparison Groups 81.3% 10.9% 7.8% 100.0% Table 6. Comparisons of Student Course Performance in Eng 121 ENG 121 Course Grade Student Comparison Groups Mean N Std. Deviation Remediated Students 2.5418 2729 .9945 PlacedOut - No Remedial, SATV below 450/500 (Placed-out) 2.4322 1644 1.0181 Control - No Remedial, SATV >=450/500 and <500/550 2.6558 2074 1.0255 College-Ready, SATV >500/550 2.8083 1973 1.1010 No SATV 2.7450 4763 1.0911 Total 2.6593 13183 1.0611 Results from this analysis confirm the findings by Weissman, Silk and Bulakowski (1997), who found that although the average GPA for the remediated students was not as high as that of college-ready students, remediated students performed at above a C average in their college-level courses. For our study, we found that our remediated students averaged a B- in ENG 120, just as the rest of their peers. Remediated students tend to earn C+ in ENG 121 compared with an average of B- for the control and the college-ready groups. Since the University allows students with high SAT-V to skip ENG 120 by taking ENG 121 directly, we saw more college-ready students in the analysis of ENG 121 than in ENG 120. 239 Remediated Student Second-Year Retention Rates The second measure used to assess the impact of remedial English course was student second-year retention rates. In order to get more accurate assessment of the impact that the remedial English program had on student persistence and graduation rates, only firsttime, full-time degree-seeking remedial student retention and graduation rates were used. 
Remediated Student Second-Year Retention Rates

The second measure used to assess the impact of the remedial English course was the second-year retention rate. To assess the program's impact on student persistence and graduation more accurately, only first-time, full-time, degree-seeking students were included; the following comparisons and analyses are therefore based on cohort data rather than course enrollment data. Table 7 summarizes the percentage of each cohort taking remedial English. Table 8 presents second-year retention rates when the same cohorts are regrouped according to whether or not they took remedial English.

Table 7. Summaries of First-time, Full-time, Degree-seeking Students Taking Remedial English, Fall 1992 - 1999 Cohorts

    Cohort Year          Taking Remedial ENG (N, %)   Non-Remedial (N, %)   Total Cohort
    1992                 388     28.53                972     71.47         1,360
    1993                 422     30.89                944     69.11         1,366
    1994                 503     37.09                853     62.91         1,356
    1995                 448     32.58                927     67.42         1,375
    1996                 507     35.04                940     64.96         1,447
    1997                 536     34.12                1,035   65.88         1,571
    1998                 507     31.30                1,113   68.70         1,620
    1999                 544     32.00                1,156   68.00         1,700
    Multi-year Average           32.68                        67.32

According to Table 7, in fall 1992 there were 1,360 students enrolled as first-time, full-time, degree-seeking students; of them, 388 (28.53%) took ENG 020 that fall. Table 8 shows that the second-year retention rates for the 1992 cohort were 92.8% for remediated students and 79.2% for non-remedial students; for the 1993 cohort, the rates were 91.9% and 79.2%, respectively. Generally speaking, remediated students had higher second-year retention rates than their non-remediated peers. One factor to consider is that at WCU, ADP students commit to enrolling for two years.

Table 8. Comparisons of Second-Year Retention Rates Between Remediated and Non-Remediated First-time, Full-time, Degree-seeking Students

    Cohort               Remedial (N Retained, % Retained)   Non-Remedial (N Retained, % Retained)   Total (N Retained, % Retained)
    1992                 360    92.78                        770    79.22                            1130    83.09
    1993                 388    91.94                        748    79.24                            1136    83.16
    1994                 411    81.71                        671    78.66                            1082    79.79
    1995                 364    81.25                        763    82.31                            1127    81.96
    1996                 400    78.90                        770    81.91                            1170    80.86
    1997                 446    83.21                        853    82.42                            1299    82.69
    1998                 417    82.25                        935    84.01                            1352    83.46
    1999                 468    86.03                        948    82.01                            1416    83.29
    Multi-year Average          84.76                               81.22

Table 9 gives second-year retention rates by the University's admission types. According to Table 9, both ADP and Special Admit students have second-year retention rates comparable to those of Regular Admit students. The higher second-year retention rate for remediated students shown in Table 8 might therefore be due in part to those students' enrollment commitment. More evidence is needed to assess the remedial English program's impact on retention.

Table 9. Second-Year Retention Rates for First-time, Full-time, Degree-seeking Student Cohorts

    Fall Cohort          Regular Admit   ADP-ACT 101   ADP-Non ACT 101   Special Admit
    1992                 83.5%           72.2%         82.9%             85.3%
    1993                 82.9%           76.8%         90.9%             85.6%
    1994                 78.9%           83.6%         87.9%             82.4%
    1995                 82.4%           77.2%         80.6%             80.7%
    1996                 80.5%           81.8%         89.1%             80.1%
    1997                 82.5%           90.7%         83.1%             81.7%
    1998                 82.9%           89.7%         89.5%             83.7%
    1999                 82.4%           83.3%         94.8%             85.1%
    Multi-Year Average   82.0%           81.9%         87.4%             83.1%
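Second-year retention rates of this kind are computed by cohort from a student-level flag for enrollment in the second fall. A minimal sketch, assuming a hypothetical file ftft_cohorts.csv with one row per first-time, full-time, degree-seeking student and made-up columns cohort_year, took_eng020 (0/1), and enrolled_2nd_fall (0/1):

    import pandas as pd

    cohort = pd.read_csv("ftft_cohorts.csv")   # hypothetical cohort file described above

    # Retained count, cohort size, and retention rate by year and remedial status
    # (cf. Tables 7 and 8).
    retention = (cohort
                 .groupby(["cohort_year", "took_eng020"])["enrolled_2nd_fall"]
                 .agg(N_retained="sum", N_cohort="count"))
    retention["pct_retained"] = 100 * retention["N_retained"] / retention["N_cohort"]
    print(retention.round(2))

    # Multi-year average of the yearly rates, as in the bottom row of Table 8.
    print(retention.groupby("took_eng020")["pct_retained"].mean().round(2))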
Remediated Student Six-Year Graduation and Retention Rates

The third measure used to assess the remedial English program's impact is the six-year retention and graduation rate. Table 10 compares six-year retention and graduation rates for students who did and did not take remedial coursework, based on the three cohorts entering in fall 1992 through fall 1994 and followed through fall 2000.

Table 10. Six-Year Graduation and Retention Rates for Fall 1992 - 94 First-Time, Full-Time, Degree-Seeking Student Cohorts as of Fall 2000

    Remediated Students (took ENG 020)
    Cohort Year   Graduated   Enrolled (7th Fall)   Grad + Enrolled   Graduated %   Enrolled %   Grad + Enrolled %
    1992          245         16                    261               63.1          4.1          67.3
    1993          230         16                    246               54.5          3.8          58.3
    1994          220         8                     228               43.7          1.6          45.3
    Average                                                           53.8          3.2          57.0

    No ENG 020
    Cohort Year   Graduated   Enrolled (7th Fall)   Grad + Enrolled   Graduated %   Enrolled %   Grad + Enrolled %
    1992          495         14                    509               50.9          1.4          52.4
    1993          488         26                    514               51.7          2.8          54.4
    1994          398         22                    420               46.7          2.6          49.2
    Average                                                           49.8          2.3          52.0

Table 10 shows that the six-year retention and graduation rate for remediated students was 57%, about five percentage points higher than the rate for students who did not take ENG 020. Six-year rates for remediated students were also compared with those of other student groups; the results, again based on the averages of the fall 1992 - 94 cohorts, are tabulated in Table 11. For example, the six-year retention and graduation rate was 56.7% for Regular Admits, 55.7% for Special Admits, 29.8% for ADP Act 101 students, and 48.3% for ADP Non-Act 101 students. The overall six-year rate for the University was 54.9%.

Table 11. Comparisons of Six-Year Graduation and Retention Rates Between Remediated Students and Other Student Groups

    Admission Type         Still Enrolled %   Graduated %   Total Ret. & Grad. %
    Regular Admit          2.4                54.3          56.7
    ADP-Act 101            5.2                24.6          29.8
    ADP-Non Act 101        1.8                46.5          48.3
    Special Admit          2.1                53.6          55.7
    Remediated             3.2                53.8          57.0
    Non-Remedial           2.3                49.8          52.0
    University Total       2.5                52.4          54.9

    National Average (CSRDE, 2000)
    Moderately Selective   44.2%
    Selective              53.6%
    Note: CSRDE figures are for institutions of 5,000 - 17,900 students; Moderately Selective: SATs 900 - 1044; Selective: SATs 1045 - 1100.

Table 11 also gives national average retention rates as reported by the Consortium for Student Retention Data Exchange (CSRDE) in May 2000. CSRDE reported that the national averages for six-year retention and graduation rates were 53.6% for selective institutions and 44.2% for moderately selective institutions; WCU falls into the "Moderately Selective" category under CSRDE's criteria. Not only was West Chester University's overall six-year retention and graduation rate above the national norm for moderately selective institutions (54.9% vs. 44.2%), and even slightly above the 53.6% benchmark for selective institutions, but its remediated first-time degree-seeking students' six-year rate (57%) was higher still than the University's own average (54.9%).
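The six-year figures in Table 10 combine two student-level flags: graduated within six years, or still enrolled in the seventh fall. A hedged sketch of the computation, reusing the hypothetical ftft_cohorts.csv layout from the previous sketch with two additional made-up columns, graduated_by_year6 (0/1) and enrolled_7th_fall (0/1):

    import pandas as pd

    cohort = pd.read_csv("ftft_cohorts.csv")   # hypothetical cohort file described earlier
    cohorts_92_94 = cohort[cohort["cohort_year"].between(1992, 1994)]

    # Graduation, continued-enrollment, and combined rates by cohort and remedial
    # status (cf. Table 10).
    six_year = (cohorts_92_94
                .groupby(["cohort_year", "took_eng020"])
                .agg(N=("cohort_year", "size"),
                     graduated=("graduated_by_year6", "sum"),
                     still_enrolled=("enrolled_7th_fall", "sum")))
    six_year["grad_pct"] = 100 * six_year["graduated"] / six_year["N"]
    six_year["enrolled_pct"] = 100 * six_year["still_enrolled"] / six_year["N"]
    six_year["grad_plus_enrolled_pct"] = six_year["grad_pct"] + six_year["enrolled_pct"]
    print(six_year.round(1))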
Conclusions & Recommendations

Based on the findings from this study, we concluded:
1. ENG 020 prepares students effectively for ENG 120 and ENG 121.
2. ENG 020 supports students' overall academic success, as measured by retention and graduation.
3. The academic success of ENG 020 students, and their strong showing in subsequent writing courses, suggests that the placement procedure is appropriate.

Our findings and conclusions led to the following recommendations pertaining to the remedial English course:
1. Given the success of ENG 020, a major overhaul is not necessary; however, Task Force members believe that the program can be improved and updated according to current best practices. The Task Force recommends that the English Department consider the following alternative structures for its developmental composition program: smaller classes; two-semester courses; an expanded Writing Center that works more closely with instructors and students in ENG 020; studio courses; and more frequent class meetings.
2. A special information meeting should be scheduled as part of summer Orientation for students placed in zero-level courses and their parents. English Department representatives would have the opportunity to explain to students and their parents the WCU English placement policy and procedure, the educational rationale, and - most important - the benefits of placement in ENG 020.
3. Communication with our feeder high schools about our academic standards and placement criteria for English should be improved via information on the University website, distribution of a brochure/information sheet to teachers and school officials, and meetings between Admissions staff and school officials.

References

Center for Institutional Data Exchange and Analysis. (1997-98). CSRDE report: The retention and graduation rates of 1989-96 entering freshman cohorts in 232 U.S. colleges and universities. Norman, OK: Center for Institutional Data Exchange and Analysis.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Ewell, P. T. (1987). Principles of longitudinal enrollment analysis: Conducting retention and student flow studies. In J. A. Muffo & G. W. McLaughlin (Eds.), A primer on institutional research. Tallahassee, FL: Association for Institutional Research.

Weissman, J., Bulakowski, C., & Jumisko, M. K. (1997). Using research to evaluate developmental education programs and policies. In J. M. Ignash (Ed.), Implementing effective policies for remedial and developmental education (New Directions for Community Colleges, No. 100, pp. 73-80). San Francisco: Jossey-Bass.

Weissman, J., Silk, E., & Bulakowski, C. (1997). Assessing developmental education policies. Research in Higher Education, 38(2), 187-200.

244 NEAIR 27th Annual Meeting Saturday, November 4th, 2000 1:00 - 5:00 pm Duquesne Room - Lower Lobby 2:00 - 5:00 pm Forbes - Lower Level Karen Bauer Assistant Director of Institutional Research and Planning University of Delaware, NEAIR Past-President 2:00 - 5:00 pm Stanwix - Lower Level Mary Ann Coughlin Professor of Research & Statistics Springfield College, NEAIR Treasurer 2:00 - 5:00 pm Heinz - Lower Level William E. Knight Director of Planning and Institutional Research Bowling Green State University Corby A. Coperthwaite Director of Planning, Research and Assessment Manchester Community College 2:00 - 5:00 pm Board Room - Lower Level Anne Marie Delaney Director of Institutional Research Babson College, NEAIR President-Elect 6:00 - 7:00 pm King's Garden North and South Mezzanine Level Conference Program Conference Registration Newcomers to Institutional Research, Part 1 This workshop is designed for new practitioners who engage in IR activities. It addresses key components of IR including defining critical issues for institutional research, identifying sources of data, developing fact books and other reports, and conducting effective survey research for assessment and evaluation. The main focus is a presentation of general concepts and practical strategies for implementing and continuing to develop effective IR at schools of all sizes and types.
Pre-Conference Workshop Statistics for Institutional Research Basic ideas in statistics will be covered in a way that is useful as an introduction or as a refresher to statistics. Descriptive statistics, sampling and probability theory as well as the inferential methods of chi-square, t-test and Pearson’s r will be covered. May be taken with or without the follow-up advanced workshop. Pre-Conference Workshop Path Analysis for Beginners This workshop will introduce path analysis in a hands-on and straightforward manner, targeting the areas of assessment and enrollment management research. Data from the presenters’ institutions will be utilized and detailed handouts provided. Attendees with laptops and copies of SPSS AMOS 4.0 are encouraged to bring them, but not required. Pre-Conference Workshop Research Design Ideas for Institutional Researchers The primary goal of this workshop is to enhance institutional researchers’ capacity to produce policy relevant studies for planning and decision-making. Specific objectives include enabling participants to translate data into information; to transform reporting into research; and to prepare methodologically sound, practically useful research reports for their institutions. The workshop will demonstrate how the institutional researcher can use principles of research design and selected research techniques to transform data collection activities into decision-oriented research projects. Pre-Conference Workshop Early Bird Reception sponsored by SPSS 245 NEAIR 27th Annual Meeting Sunday, November 5th, 2000 8:00 - 4:30 pm Ballroom 4 - Mezzanine Level 9:00 - noon Rivers - Mezzanine Level J. Fredericks Volkwein The Pennsylvania State University, NEAIR President 9:00 - noon Brigade - Mezzanine Level Karen Bauer Assistant Director of Institutional Research and Planning, University of Delaware, NEAIR Past-President 9:00 - noon Traders - Mezzanine Level Mary Ann Coughlin Professor of Research & Statistics Springfield College, NEAIR Treasurer 9:00 - noon Chartiers - Mezzanine Level Jim Fergerson Director of Institutional Planning & Analysis Bates College John Pryor Director of Undergraduate Evaluation & Research Dartmouth College Conference Program Conference Registration The Three Stages of Enrollment Management Enrollment management is a component of institutional effectiveness and quality control. At the first stage, enrollment management includes attracting, admitting, and enrolling students. This is the set of admissions activities that campus managers traditionally think of as constituting the core of Enrollment Management. At the second stage lies activities that surround the new student experience -- activities that ensure the student's successful introduction and integration into the institution. At the third stage, enrollment management focuses upon the quality and totality of the student experience -- experiences and factors producing high academic performance, student persistence to degree completion, and success in the world beyond the campus. Pre-Conference Workshop Newcomers to Institutional Research, Part 2 Continuation; Part 1 is a pre-requisite. Pre-Conference Workshop Advanced Statistics for Institutional Research This workshop will deal with advanced issues in inferential statistics. Topics such as Analysis of Variance, Factor Analysis, Multivariate Regression, and Logit/Probit models will be covered and contrasted with other statistical tools and techniques. 
A case study approach will be used illustrating applications of these statistical techniques in institutional research. *Open to those who have completed the introductory workshop Saturday afternoon or who have an equivalent background. Pre-Conference Workshop Designing and Conducting Web-based Surveys This workshop will provide an introduction to designing and conducting successful web-based surveys. The presenters will address administrative and methodological concerns and technological issues. Workshop topics will include items such as contacting a sample via email, maintaining general security and limiting accesses to the survey to pre-selected individuals, guarding against multiple responses, and keeping user information attached to responses. There will be an introduction to setting up an HTML survey form, and an overview of some of the software that is available to facilitate a webbased survey. The workshop will include demonstrations, but is not designed to be hands-on. Pre-Conference Workshop 246 NEAIR 27th Annual Meeting Sunday, November 5th, 2000 Conference Program Noon - 1:30 pm 1:30 - 4:30 pm Traders - Mezzanine Level Craig Clagett Vice President Planning, Marketing, and Assessment Carroll Community College Lunch on your own Office Management and Information Dissemination Strategies for New Directors of Institutional Research Designed for institutional researchers who have recently become directors, this workshop focuses on office management strategies and techniques for effective information dissemination. Topics covered include environmental scanning, office staffing, staff incentive and recognition programs, office project management systems, principles of tabular and graphical data presentation, print and electronic reporting. Pre-Conference Workshop Surveys of Students and Faculty: Using Good Practices and the Internet to Lower Costs and Increase Response Rates This workshop explains how to combine good survey practices with easy to learn Internet technologies to enable institutional researchers to conduct quick and lowcost Internet surveys with high response rates. The workshop covers topics such as the pros and cons of paper and electronic surveys, the skills and software needed for electronic surveys, and survey administration over the web. Pre-Conference Workshop Maintaining Our Bridges - What Do We Really Know About IT? Information technologies are a part of the critical connecting infrastructure of our campuses and increasingly a center of attention at the highest levels of our institutions. We’re wired up, unplugged, webified, informated, reengineered, e-everythinged. We’ve shifted paradigms, danced with devils, gone the “distance”, and managed transitions, quality, and customer relationships, And yet, do we really know what it takes to sustain our technology-rich environments? Opening Plenary Session President's Reception sponsored by Principia Products Michelle Appel Director of Institutional Research Carroll Community College 1:30 - 4:30 pm Brigade - Mezzanine Level Stephen R. Porter Director of Institutional Research, Wesleyan University Paul D. 
Umbach Graduate Research Assistant University of Maryland, College Park 5:00 - 6:00 pm Ballroom 3 - Mezzanine Level David Smallen Director of ITS Hamilton College David is the recent recipient of the Educause Leadership Award Immediately following plenary session King's Garden North King's Garden South and Bateau Mezzanine Level Banquet and Entertainment sponsored by the Center for the Study of Higher Education at The Pennsylvania State University Chamber Music provided by IL Quattro Cash Bar 247 NEAIR 27th Annual Meeting Monday, November 6th, 2000 8:00 - 11:00 am Ballroom 4 - Mezzanine Level 7:15 - 8:30 am Ballroom 4 - Mezzanine Level Ellen Kanarek Vice President Applied Educational Research, Inc. Hailin Zhang Data Specialist, Institutional Research University of Massachusetts, Boston 7:30 - 8:30 am Brigade - Mezzanine Level Margaret K. Cohen Assistant Vice President for Institutional Research George Washington University 7:30 - 8:30 am Rivers - Mezzanine Level Emily Thomas Director of Planning and Institutional Research SUNY Stony Brook 7:30 - 8:30 am Traders - Mezzanine Level C. Anthony Broh COFHE 7:30 - 8:30 am King’s Terrace - Mezzanine Level Michelle Appel Director of Institutional Research Carroll Community College 8:30 - 9:15 am Brigade - Mezzanine Level John Pryor Director of Undergraduate Evaluation and Research Dartmouth College Conference Program Conference Registration Continental Breakfast sponsored Peterson's Concurrent Special Interest Groups Those interested in one of the special interest groups may pick up breakfast and take with them to the sessions. In addition, there will be several table topics at breakfast: ASQ Users First Year in Institutional Research? New to IR? Join one of your fellow colleagues in discussing joy, sorrows, successes and failures of your first year in a new profession. Banner Users Special Interest Group This informal session provides an opportunity to meet other Banner Users, discuss problems, and share solutions. It is an open forum where all who are interested have the opportunity to set the agenda. Everyone – novice and veteran Bannerites – are welcome. SIG PeopleSoft Users Special Interest Group SIG COFHE COFHE members will meet for a SIG. Datatel Users Group SIG A Diversity Needs Assessment for Staff A three-year long process prefaced the administration of this diversity tool at a small private liberal arts college. The presentation will outline the creation of this NEAIR research grant funded tool – including the many discussions, obstacles, re-directions, frustrations and triumphs along the way to getting the support for the project. Results of the survey will be shared along with the reactions to those results. Research Paper 248 NEAIR 27th Annual Meeting Monday, November 6th, 2000 8:30 - 9:15 am Rivers - Mezzanine Level Meihua Zhai Director of Institutional Research West Chester University of PA Jeff Himmelberger Coordinator of Institutional Research Clark University Shuqin Guo Coordinator of Evaluation and Research Walden University 8:30 - 9:15 am Chartiers - Mezzanine Level Edward J. Torpy Sales Engineer SPSS Inc. 8:30 - 9:15 am Traders - Mezzanine Level John L. Yeager Associate Professor/Administrative and Policy Studies Department University of Pittsburgh Glenn M. Nelson R. Tony Eichelberger 8:30 - 9:15 am Duquesne - Lobby Level Ann H. 
Dodd Senior Consultant, Center for Quality and Planning Carol Everett Associate Director, Center for Quality and Planning Conference Program Using Multiple Projection Models to Fit Different Student Populations Enrollment projection is becoming one of the major tasks in institutional research. Developing a best-fitting enrollment projection model has been a major challenge for IR researchers. This panel will discuss the pros and cons of three different projection models used in three different types of institutions. Three different enrollment projection models in Excel will also be shared during this panel discussion. Panel SPSS Answer Tree and Clementine for Data Mining Data mining (the process of discovering meaningful new information in large amounts of data) will be introduced, including a discussion of how it differs from traditional statistics. A demonstration of SPSS’ leading data mining products (Answer Tree and Clementine) will illustrate the benefits of data mining to institutional researchers. Vendor Showcase The Development and Utilization of a School Benchmarking System for Management Improvement This is a description of a four-year school benchmarking project to improve school management. The development of the school level process, data requirements and collection issues and utilization issues are discussed. The data requirements and utility of this process are also examined from a department perspective. Research Paper Measuring Quality Improvement: A Scorecard Approach As teamwork becomes an integral part of the way we do our work, it is critically important to be able to measure the success of team initiatives. The presenters will provide information about Penn State’s Quality Scorecard and team database, a unique approach to measuring and sharing the results of teamwork. Workshare Dan Nugent Management Information Associate Pennsylvania State University 249 NEAIR 27th Annual Meeting Monday, November 6th, 2000 8:30 - 9:15 am King's Terrace Kathleen Keenan Director of Institutional Research Massachusetts College of Art 9:25 - 10:10 am Brigade - Mezzanine Level Janet Nickels Office of Institutional Research Carroll Community College Barbara Livieratos, Howard Community College Bob Lynch, Montgomery College Koosappa Rajesekhara, Community College of Baltimore County 9:25 - 10:10 am Rivers - Mezzanine Level David Brodigan GDA Research 9:25 - 10:10 am Traders - Mezzanine Level Mary Louise Gerek Institutional Research Analyst Phyllis Ladrigan Professor of Psychology Nazareth College Conference Program Getting Started in Financial Aid Research This workshare will present some strategies employed by an institutional research office to improve the quality and availability of financial aid data for public information and institutional planning at a small public college. The discussion will include general and technical issues, analytic procedures, and results of some specific projects. Workshare We Know What They Did Last Summer: A Survey of Summer Students at Four Community Colleges Students enrolled in summer courses at four Maryland community colleges were surveyed about their opinions and perceptions of the college, and their coursescheduling preferences. Analysis focused on those students who normally attend four-year institutions during the regular academic year and their comparison of the community college with their “home” institution. 
Research Paper The Colleges Students Choose and How They Decide Data from surveys conducted over the last five year for two dozen colleges and universities have been combined into a single database that has yielded new insights into the thinking of prospective college students as they choose among six different categories of colleges and universities. What kinds of students choose the most selective liberal arts colleges, other liberal arts colleges, large private research universities, smaller private universities, public flagships, and regional public colleges and universities? What kinds of institutions are in competition with each other and for which students? Workshare In-Class Projects: Using Students to Increase IR Resources To assist a Classroom Utilization CQI (Continuous Quality Improvement) team in determining and planning optimal instructional space utilization, the students in an Environmental Psychology course inventoried 40 available classrooms on campus as a term project. This is a case study of cooperation between the IR Office, administrative offices, faculty, and students to build a creative solution to a shortage of person power. Workshare 250 NEAIR 27th Annual Meeting Monday, November 6th, 2000 9:25 - 10:10 am Chartiers - Mezzanine Level Victor Berutti Vice President, Products Principia Products, Inc. 9:25 - 10:10 am Duquesne - Lobby Level Michelle Appel Director of Institutional Research Carroll Community College Craig Clagett Vice President, Planning, Marketing and Assessment 10:10 - 10:30 am Ballroom 4 10:30 - 11:10 am Brigade - Mezzanine Level Ellen Kanarek Vice President Applied Educational Research, Inc. 10:30 - 11:10 am Traders - Mezzanine Level Karen W. Bauer Assistant Director of Institutional Research and Planning University of Delaware, NEAIR Past-President Conference Program Remark Product Demonstration Principia will demonstrate and discuss software tools used by IR professionals to quickly and economically capture data for their research studies. The Remark Office OMR, Remark Web Survey, and Remark Classic OMR software will be demonstrated during this session. These products are widely used in IR departments to capture data from both paper and web-based surveys. Vendor Showcase What’s Happening in the Classroom? Using Information about the Teaching and Learning Environment in Institutional Effectiveness Assessment Assessing the teaching and learning environment requires not only outcomes assessment but also assessment of the processes by which outcomes are achieved. This paper describes a survey which collected data, section by section, on instructional methods, course requirements, and assessment methodologies. This information was integrated into the institutional assessment plan. Research Paper Break Developing a Web Version of the College Board’s Admitted Student Questionnaire This workshare will discuss a pilot effort to translate the College Board’s ASQ onto the Web. Each of the three pilot colleges experienced different problems. The discussion will cover the most challenging aspects of developing the survey itself, as well as issues that arose once the site went live. Workshare Select Findings from the UDAES Longitudinal Study This presentation describes the research design and select findings from the longitudinal study, UDAES, University of Delaware Academic Experiences Study. 
Funded through the National Science Foundation, this project examines the effectiveness of the Undergraduate Research program and its educational effects on students and faculty. Finding related to student demographics and growth will be shared. Research Paper 251 NEAIR 27th Annual Meeting Monday, November 6th, 2000 10:30 - 11:10 am Rivers - Mezzanine Level David Wright Associate Professor Marsha Krotseng Vice Provost West Liberty State College, Former AIR President 10:30 - 11:10 am King's Terrace - Mezzanine Arthur Kramer Director of Institutional Research New Jersey City University 10:30 - 11:10 am Chartiers - Mezzanine Level Michael J. Strada FACDIS Co-Director and Professor West Virginia University Conference Program Assessing Outcomes for School of Business Majors Using a Primary Trait Analysis This paper discusses the development and implementation of a student outcomes assessment program for School of Business Administration majors at a public baccalaureate institution. Specifically, it describes the creation and successful use of a Primary Trait Analysis instrument during a six-month period. Highlights include a description of the process, findings from the pilot, lessons learned, and recommendations. Research Paper Creation of a Scale to Measure Faculty Development Needs and Motivation to Participate in Development Programs. This paper discusses a faculty survey. Faculty were surveyed to: 1) Assess satisfaction with current development activities and policies; and 2) Establish a foundation for a scale to assess factors that motivate faculty to participate in development activities. Results revealed general satisfaction and a factor concerned with administrative recognition and communication of faculty achievement. Research Paper Assessing a Decade of Assessment and Faculty Resistance to it The Institutional Research literature includes the belief that assessment works best when faculty-driven. However, exclusive reliance on “hard data” to measure student “outcomes” fails (in the eyes of most instructors) to satisfy their concerns about relevance, validity, and significance. More attention to the ancillary role of “soft data,” as well as the assessment of pedagogical “process and content” – in addition to standard pedagogical “outcomes” – can enhance faculty confidence in assessment. And where should this quest for “soft data,” plus pedagogical “process and content” begin? With the misunderstood course syllabus as a rich source of “soft data.” Research Paper 252 NEAIR 27th Annual Meeting Monday, November 6th, 2000 10:30 - 11:10 am Duquesne - Lobby Level Kathleen Rottier Senior Research Analyst College of Southern Maryland Yun Kim Office, Planning and Research College of Southern Maryland Conference Program Getting Hit with an IT System Change and Surviving the Impact on Institutional Research Functions Seven crises that had to be overcome by institutional researcher in order to survive “The System Change” are the focus of this workshare. Concrete strategies to assess reliability, complete mandated reports, overcome security challenges, and continue institutional research activities during an information system change will be discussed. Workshare Gayle Fink Director Planning and Research Anne Arundel Community College Oyebanjo Lajubutu Director of Institutional Research Harford Community College Jean Frank Senior Research Analyst Howard Community College 11:20 - noon Chartiers - Mezzanine Level Mitchell S. Nesler Director of Research, Academic Programs Amanda M. 
Maynard Regents College 11:20 - noon Brigade - Mezzanine Level Linda Strauss Director, Penn State Learning Edge Academic Program Penn State University J. Fredericks Volkwein The Pennsylvania State University, NEAIR President Curriculum Review at a Virtual University: An External Faculty Panel Approach Measuring program effectiveness is an important part of ensuring academic excellence in higher education, especially for institutions serving students at a distance. This paper presents the Regents College model for reviewing curriculum structure and program objectives, in the context of Biology. Process, challenges, and outcomes will be discussed. Research Paper Institutional Influence on Student Learning and Growth: A Response to Accountability and Accreditation Forces in Two and Four Year Institutions Pascarella’s (1985) General Causal Model serves as a conceptual framework to examine the institutional characteristics and environments contributing to student learning and growth at two and four year institutions. The study utilizes a multicampus database with 8,405 students. Student learning is measured through self-perceptions and faculty perceptions (cumulative grade point average). Research Paper 253 NEAIR 27th Annual Meeting Monday, November 6th, 2000 11:20 - noon Rivers - Mezzanine Level Corby A. Coperthwaite Director of Planning, Research and Assessment Marcia Jehnings Director, Social Sciences Division Manchester Community College 11:20 - noon Traders - Mezzanine Level Gary Choban Vice President Innervate 11:20 - noon King's Terrace - Mezzanine Level Kenneth R. Ostberg Regional Director National Student Loan Clearinghouse 11:20 - noon Duquesne - Lobby Level Carol Trosset Director of Institutional Research Grinnell College Noon - 2:00 pm Ballroom 1 – Mezzanine Level Conference Program Implementing a Program of Outcomes Assessment in the Land of Steady Habits For years this community college talked about assessment and finally, within the last two years, learning outcomes for General Education, Student Affairs, and all Academic Programs have emerged. Course based and portfolio assessments are being piloted. What changed? How did it happen? Where will the College go from here? Workshare Facilitating the Use of Assessment Data and Documenting Program Impact – A Software Solution TracDat – a flexible software solution for managing the academic assessment process. For an assessment program to be effective, all phases of the assessment process must be addressed. TracDat is a software solution that provides academic departments with an efficient and reliable mechanism for managing the assessment process. Vendor Showcase Using Enrollment Search to Enhance Effectiveness Institutional researchers can now use Enrollment Search to study the migratory patterns of applicants for admission and ex-students as they move through the higher education system. Vendor Showcase Using Qualitative Analytical Methods for Institutional Research Statistical analysis is the stock-in-trade for institutional research, but the field can also benefit from qualitative methods. Trosset, a cultural anthropologist, will share several qualitative analyses from her work at Grinnell College, explain the techniques involved, and discuss ways in which these methods can enhance research efforts. 
Research Paper Luncheon and Business Meeting 254 NEAIR 27th Annual Meeting Monday, November 6th, 2000 2:00 - 3:30 pm Brigade - Mezzanine Level Stephen Thorpe Assistant Provost Drexel University Jim Fergerson Director of Institutional Planning and Analysis Bates College Conference Program Online vs. Paper Surveys: A Comparison of Methodologies The use of online surveys vs. traditional paper methods is becoming an increasingly popular approach for campusbased research activities. The panelists, each of whom have conducted several online studies, will discuss the advantages and disadvantages of web-based surveys, and their campus-based findings of similarities and differences in response rates and potential response bias. Panel Mark Palladino Research Specialist Drexel University John Pryor Director of Undergraduate Evaluation and Research Dartmouth College 2:00 - 2:40 pm Chartiers - Mezzanine Level Tuan Dang Do Assistant Director, Institutional Research Robert Yanckello Director, Institutional Research Central Connecticut State University 2:00 - 2:40 pm Rivers - Mezzanine Level Anne Marie Delaney Director of Institutional Research Babson College, NEAIR President-Elect Visual IPEDS The purpose of this presentation is to describe our progress in using object-oriented languages (especially Visual Basic) to create programs to automatically complete IPEDS reports (enrollment, age, residence, undergrad transfer, residence of first time students and credit hours, so far). This user-friendly interface tool will eliminate many hours of work in IR offices. Workshare Institutional Researchers: Challenges, Resources and Opportunities This paper presents the results of a study that investigated challenges institutional researchers encounter in their career; resources for coping with these challenges; and the impact of these challenges on engagement in policy. Results identify concern about the amount of work, limited opportunity for advancement, and producing quality work within time constraints as the most prevalent challenges. However, those who have a mentor, a strong professional network and an independent job structure can more effectively meet such challenges and actively engage in policy development. Research Paper 255 NEAIR 27th Annual Meeting Monday, November 6th, 2000 2:00 - 2:40 pm Traders - Mezzanine Level Emily Thomas Director of Planning and Inst. Research Douglas Panico Director of Management Analysis & Audit, SUNY Stony Brook 2:00 - 2:40 pm Duquesne - Lobby Level Tracy Polinsky Coordinator of Institutional Research Butler County Community College 2:50 - 3:30 pm Duquesne - Lobby Level James Robertson Assistant Director, Planning and Institutional Research Community College of Allegheny College Julia Peters 2:50 - 3:30 pm Traders - Mezzanine Level Sandra Price Director of Institutional Research Keene State College Dawn Geronimo Terkla Executive Director, Institutional Research Tufts University 2:50 - 3:30 pm Chartiers - Mezzanine Level Donald A. Gillespie Director of Institutional Research Fordham University Conference Program Financial and Performance Profiles of Academic Departments This workshare will describe how we created academic department profiles that include their resources, their outputs, and an analysis of their financial contribution to the university. We will present our profile, discuss how the data are used, and describe how we solved methodological and technical problems. 
Workshare The IR-CQI Connection "Quality" has been stimulating self-evaluation, creative thinking, and change at institutions for years. Because quality efforts are data based and assessment dependent, they are appropriate projects for institutional researchers. By providing data and encouraging systematic evaluation, they can help their colleges to successfully implement quality efforts at their institutions. Research Paper End of Month Reporting at CCAC In switching from legacy to Datatel, CCAC lost all reporting infrastructure, which Institutional Research needed to re-create. This paper describes the end of month reporting process for creating various enrollment comparisons. Anyone who does reporting may be interested. Included are queries, SPSS syntaxes, sample Excel worksheets and PDF outputs. What Would You Do? Ethical Scenarios Illustrating AIR's Code of Ethics AIR's Code of Ethics is in the process of being revised. Members of AIR's Task Force on Ethics will present a series of scenarios depicting ethical dilemmas. Following the each scenario the audience will be asked to discuss several questions regarding the dilemma using the Code as reference. Workshare Results of an Exploratory Survey of the Staffing and Responsibilities of Institutional Research Offices This workshare will present the results of an exploratory survey of staffing patterns and responsibilities of institutional research offices at selected Catholic institutions and plans for a survey of a full range of US colleges that might examine the amount of time spent on major institutional research tasks. 256 NEAIR 27th Annual Meeting Monday, November 6th, 2000 2:50 - 3:30 pm Brigade - Mezzanine Level Cherry Danielson University System of New Hampshire 2:50 - 3:30 pm Rivers - Mezzanine Level Monica E. Randall Associate Director of Policy Analysis and Research Maryland Higher Education Commission Geoffrey Newman Finance Policy Analyst Maryland Higher Education Commission Elissa Klein Research Director Maryland Association of Comm. Colleges 3:45 - 4:45 pm Ballroom 4 - Mezzanine Level Tom Mortenson Post Secondary Opportunity Conference Program Change Leadership and the Implications of Culture The last twenty years have been riddled with various types of change as colleges and universities attempt to position themselves for survival and success. While institutions have designed strategies for change, the role of leaders in the process and their ability to affect outcomes has been laden with high expectations. Thus, the relationship between leadership and change has emerged as a key juncture for scholarly consideration. This literature review synthesizes theoretical models, empirical studies, and anecdotal writings that address issues of change and leadership emanating from both Organization and Higher Education literature. Facilities Planning In the 21st Century: Developing Continuous Education Enrollment Projections For Maryland’s Community Colleges The purpose of this workshare is to discuss the progress that Maryland has made in the development of a methodology for projecting noncredit continuing education enrollments at Maryland’s community colleges. The workshare presenters will discuss the history of the development of continuing education enrollment projections; the methodology for projecting eligible noncredit enrollments; and the policy issues related to the development of this model. 
This workshare will appeal to those interested in projecting noncredit continuing education enrollments and at those interested in facilities planning. Workshare Higher educational opportunity in the human capital economy ! The human capital economy (income by educational attainment) ! Social and private investment in human capital ! The distribution/redistribution of higher education by family income over the last three decades in the U.S 257 NEAIR 27th Annual Meeting Monday, November 6th, 2000 Conference Program 5:00 - 6:00 pm Ballroom 3 – Mezzanine Level Happy Hour (meet friends and make dinner plans) Concurrent Table Topics and Special Interest Groups Kit Mahoney, CIRP Survey Coordinator-UCLA's Higher Education Research Institute Using the CIRP Surveys for Student Assessment Colleges can collect valuable baseline data on their entering students using the Cooperative Institutional Research Program (CIRP) Freshman/Entering Student Survey. By following-up these same students later with the College Student Survey, colleges accumulate comprehensive data on their students. A growing number of colleges are using these data for accreditation self-studies; satisfying statemandated performance measures and monitoring the impact of college on students. The discussion will cover practical considerations of using the combination of CIRP/CSS for longitudinal assessment. Mark Zidzik Director, Research Development Peterson’s The Baby and the Bath Water: What Data Are Important When Profiling Graduate and Professional Programs Given the different perspectives of data providers and collectors and information suppliers and users, the question of what data are most important when researching postbaccalaureate study opportunities has many answers. This table topic, facilitated by Research staff from Peterson’s, will feature discussion of the relative merits of data that are collected in each of the following areas: enrollment, faculty, research, degrees, academic subject areas, requirements, completions, and financial aid. Rocco Russo Vice President of Research Peterson’s Valerie S. Rogers Assistant to Director, Office of Institutional Research University of Connecticut Selecting Peer Institutions With the recent changes in the Carnegie Classifications this table topic will discuss a University’s process in re-defining its peer base institutions. What is an appropriate number of peers? What factors should be considered when defining a peer group? Others are encouraged to share their experiences in peer selection Pam Roelfs Director of Institutional Research University of Connecticut Performance Indicators: The Good, The Bad, and The Ugly General discussion of indicators of effectiveness, efficiency, and “success” for colleges and universities will be the main purpose of this table topic. Which performance indicators are good? Which ones are bad? How ugly have been the definition, application, measurement, and interpretation of them? Discussion will focus on indicators used in institutional comparisons. 258 NEAIR 27th Annual Meeting Monday, November 6th, 2000 Christopher Hourigan Assistant Director, Planning, Research and Evaluation William Paterson University Conference Program Collaboration between Institutional Research and Academic Departments In addition to providing information and analysis to the administration regularly, institutional researchers can also help an institution to work towards its mission by serving as a resource for academic departments. 
This table topic will be a discussion about how institutional research offices can make valuable contributions to academic departments and will feature examples of the work that the Office of Planning, Research and Evaluation has done for the academic departments at William Paterson University. Special Interest Groups Jason Casey Brigade - Mezzanine Level HEDS Linda Junker Traders – Mezzanine Level Catholic Colleges Peter Parnell Rivers - Mezzanine Level SUNYAIRPO 259 NEAIR 27th Annual Meeting Tuesday, November 7th, 2000 7:15 - 8:30 a.m. Ballroom 4 - Mezzanine Level 8:00 - 8:40 am Traders - Mezzanine Level Robert K. Toutkoushian Executive Director, Office of Policy Analysis University System of New Hampshire 8:00 - 8:40 am Rivers - Mezzanine Level Dawn Geronimo Terkla Executive Director, Institutional Research Tufts University Gordon J. Hewitt Assistant Director of Institutional Research Tufts University 8:00 - 8:40 am Brigade - Mezzanine Level Kelli Armstrong Director of Institutional Research UMass President’s Office Becky Brodigan Director of Institutional Research Middlebury College 8:00 - 8:40 am Traders - Mezzanine Level Meihua Zhai Director of Institutional Research, Office of Planning & Analysis Jennie Skirl Associate Dean of Arts & Sciences West Chester University of PA Conference Program Continental Breakfast sponsored by George Dehne & Associates A Comparison of Faculty in Regular Versus NonRegular Academic Positions This study uses data from the NSOPF:93 national survey of faculty to examine the satisfaction and relative compensation of faculty employed in regular versus nonregular academic positions. For the purpose of this study, faculty are broken into four categories (tenure/tenure-track vs. non-tenured, full-time vs. part-time). Descriptive statistics and multivariate regression techniques are then used to compare faculty in these groups on the basis of their background characteristics, satisfaction with academic employment, and compensation. Research Paper New Technology and Student Interaction with the Institution This paper examines how prospective students as well as current undergraduates are using electronic communication to interact with various campus constituencies. Findings show that students extensively use e-mail and IRC to communicate with friends and colleagues, but the use of these mediums – as well as other interactive Web-based mediums – to communicate with faculty and staff and obtain admissions information is much less. Research Paper Keeping It Private or Bringing It Public: Careers in IR Have you ever wondered what it was like to work “on the other side?” Sessions at institutional research conferences are often divided among public and private institution lines. Hot issues that are pressing for colleagues on public campuses may not be so for institutional researchers at private colleges (and vice versa.) This session is designed to be an open discussion about career paths in institutional research. The panelists will speak from personal experiences about crossing the border between private and public institutions, and moving into areas beyond traditional institutional research work. Workshare The Impact of Remedial English Courses on Student College-Level English Performance and Persistence The impact of remedial English class on student persistence and performance in their college-level English was studied. 
Retention rates and percentages of students who passed their college-level English were compared between remedial and non-remedial course takers whose SATV were below 500 (550 after recentering). Student course grades from fall 1992 to spring 2000 were used in this study. Research Paper 260 NEAIR 27th Annual Meeting Tuesday, November 7th, 2000 8:00 - 8:40 am Duquesne - Lobby Level David X. Cheng Assistant Dean for Research and Planning University 8:50 - 9:30 am Brigade - Mezzanine Level Ronald Zaccari President West Liberty State College 8:50 - 9:30 am King's Terrace - Mezzanine Level Michael J. Dooris Director, Planning, Research & Assessment, Center for Quality & Planning Louise E. Sandmeyer Executive Director, Center for Quality and Planning Pennsylvania State University 8:50 - 9:30 am Rivers - Mezzanine Level Mitchell S. Nesler Director of Research, Academic Program Regents College Roy G. Gunnarsson Conference Program Student Self-Perceived Gain Scales as the Outcome Measures of Collegiate Experience This study attempts to articulate student collegiate experience using self-reports and to construct the gain scales that can be used as the outcome measures in an institution's overall assessment efforts. Research Paper A Presidential Conversation: Collaborating for Change Working together, institutional researchers and presidents can provide a solid force for change and enliven the strategic planning and management of their colleges and universities. This dialogue between institutional researchers and a college president will explore ways to foster such opportunities and consider a variety of issues, including how institutional researchers can creatively assist presidents and ways in which presidents can effectively employ their institutional research offices. Faculty & Staff Surveys: Insight for Improvement At Penn State, university, college, and department improvement efforts can draw from a centrally assembled package of tools – such as surveys and exit interviews – to gain insight into faculty and staff opinion. The presenters will share examples from Penn State, and invite participants to discuss approaches at their institutions. Workshare What Facilitates or Inhibits Adults from Participating in Adult Education? An Analysis of the National Household Education Survey. This study was designed to examine the self-reported barriers adults face in accessing adult education, their motivations for participating in adult education, and the demographic characteristics associated with these factors. NHES:95 data were analyzed to address these questions. Research Paper 261 NEAIR 27th Annual Meeting Tuesday, November 7th, 2000 8:50 - 9:30 am Traders - Mezzanine Level Tsuey-Ping Lee Assistant for Institutional Research University at Albany, SUNY Chisato Tada International Student Advisor University at Albany, SUNY 8:50 - 9:30 am Chartiers - Mezzanine Level Kevin B. Murphy Institutional Research Analyst University of Massachusetts, Boston 8:50 - 9:30 am Duquesne - Lobby Level Karl Boughan Coordinator of Institutional Research Prince George's Community College 9:40 - 10:20 am Brigade - Mezzanine Level Marsha V.
Krotseng Vice Provost Ronald Zaccari President West Liberty State University Conference Program To Show How We Care: Combining Web-Based Technology and International Student Needs Assessment The purposes of this research are to assess international student needs and to experiment with web-based survey techniques. This research paper not only analyzes the results based on the degree level of international students, cultural background, academic major and length of stay in US, but also details the basic survey research issues and complexities of conducting a web-based, and traditional paper surveys. This study will present the detailed survey processes, the data, the research results and the application of the results. Research Paper Developing an Analysis of Outcomes for the Writing Proficiency Requirement This is a case study of the process of developing an analysis of outcomes for the writing proficiency requirement. It will focus on the role of the institutional researcher in question formulation, identifying what is currently feasible, and preparing to better answer the question in the future. Research Paper Through the Development Maze: Remedial Program Complexity and Student Progress at a Large, Suburban Community College Unlike most past developmental program research emphasizing the external correlates of remedial success, this community college case study focuses instead on program configuration and its interaction with the credit instructional process and new student expectations of college. Cluster analysis is used to clarify the tangled web of forces at work, sorting a cohort of recent fall-entering remedial students into discrete “developmental strategy” groups, each representing a unique set of student behavioral responses to the remedial process and a unique remediation outcome pattern. Research Paper The Transformational Power of Strategic Planning Strategic planning is vital to the effective management of colleges and universities. It also is integral to institutional change. This case study demonstrates the critical connection between strategic planning and institutional transformation by tracing the strategic planning process for a public baccalaureate institution over a four-year period. Numerous changes resulting from the plan are highlighted. Research Paper 262 NEAIR 27th Annual Meeting Tuesday, November 7th, 2000 9:40 - 10:20 am Rivers - Mezzanine Level Richard J. Reeves Senior Research and Planning Associate Cornell University 9:40 - 10:20 am Chartiers - Mezzanine Level Stephen R. Porter Director of Institutional Research, Wesleyan University Paul D. Umbach Graduate Research Assistant University of Maryland, College Park 9:40 - 10:20 am Chartiers - Mezzanine Level Robert Morse US News and World Report Peggye Cohen George Washington University Moderator 10:30 - noon Ballroom 3 - Mezzanine Level Dawn Geronimo Terkla, Incoming AIR President and Executive Director of Institutional Research, Tufts University Conference Program Data Mining Basics: What is it and why use it? Intended for institutional researchers interested in developing their own data-mining system, this presentation will briefly cover the following topics: what research methods constitute data-mining, how it can be used to improve enrollment management, a brief comparison of data-mining to traditional statistics, and the evolution of data-mining. The presenter will then discuss the components (technology and personnel) necessary to create a functional data-mining system. 
Workshare We Can't Get There in Time: Assessing the Time between Classes and Classroom Disruptions This workshare describes and analyzes the time-between-classes problem at the University of Maryland. Using facilities and course scheduling data in combination with student survey data, we discovered that many students had distances to travel between classes that took longer than the allotted ten minutes. The survey indicated that students reacted by leaving class early and skipping class altogether. Reasons for having such a class schedule ranged from problems registering for a particular course to a desire for a compact schedule. Workshare The U.S. News College Rankings A detailed explanation and discussion of the methodology changes made in the "America's Best Colleges" rankings published on September 1, 2000. U.S. News' views on the September 2000 Washington Monthly article "Playing With Numbers." An opportunity to ask questions about the rankings. What's Happening in Washington: An update on Institutional Research Issues from a National Perspective Members of various NPEC and AIR committees will report on the latest happenings regarding Student Outcomes, College Costs, and a variety of other issues. Plenary Session Jennifer Brown, Director of Institutional Research and Policy Studies, University of Massachusetts, Boston Mark Putnam, Director of University Planning and Research, Northeastern University and Chair, NPEC Committee on College Costs 263