
Essentials of Cross-Battery Assessment
Ebook · 885 pages · 6 hours


About this ebook

The most up-to-date resource of comprehensive information for conducting cross-battery assessments

The Cross-Battery assessment approach—also referred to as the XBA approach—is a time-efficient assessment method grounded solidly in contemporary theory and research. The XBA approach systematically integrates data across cognitive, achievement, and neuropsychological batteries, enabling practitioners to expand their traditional assessments to more comprehensively address referral concerns. This approach also includes guidelines for identification of specific learning disabilities and assessment of cognitive strengths and weaknesses in individuals from culturally and linguistically diverse backgrounds.

Like all the volumes in the Essentials of Psychological Assessment series, Essentials of Cross-Battery Assessment, Third Edition is designed to help busy practitioners quickly acquire the knowledge and skills they need to make optimal use of psychological assessment instruments. Each concise chapter features numerous callout boxes highlighting key concepts, bulleted points, and extensive illustrative material, as well as test questions that help you to gauge and reinforce your grasp of the information covered.

Essentials of Cross-Battery Assessment, Third Edition is updated to include the latest editions of cognitive ability test batteries, such as the WISC-IV, WAIS-IV, and WJ III COG, and special purpose cognitive tests including the WMS-IV and TOMAL-II. This book now also covers many neuropsychological batteries such as the NEPSY-II and D-KEFS and provides extensive coverage of achievement batteries and special purpose tests, including the WIAT-III, KM-3, WRMT-3, and TOWL-4. In all, this book includes over 100 psychological batteries and 750 subtests, all of which are classified according to CHC theory (and many according to neuropsychological theory). This useful guide includes a timesaving CD-ROM, Essential Tools for Cross-Battery Assessment (XBA) Applications and Interpretation, which allows users to enter data and review results and interpretive statements that may be included in psychological reports.

Note: CD-ROM/DVD and other supplementary materials are not included as part of the eBook file.

Language: English
Publisher: Wiley
Release date: March 6, 2013
ISBN: 9781118234563
Author

Dawn P. Flanagan

Dr. Dawn P. Flanagan is Professor of Psychology and Director of the School Psychology training programs at St. John's University in Queens, NY. She is also Clinical Assistant Professor at the Yale Child Study Center, Yale University School of Medicine. In addition to her teaching responsibilities in the areas of intellectual assessment, psychoeducational assessment, learning disability, and professional issues in school psychology, she serves as an expert witness, learning disability consultant, and psychoeducational test/measurement consultant and trainer for organizations both nationally and internationally.


    Book preview

    Essentials of Cross-Battery Assessment - Dawn P. Flanagan

    Series Preface

    In the Essentials of Psychological Assessment series, we have attempted to provide the reader with books that will deliver key practical information in the most efficient and accessible style. The series features instruments in a variety of domains, such as cognition, personality, education, and neuropsychology. For the experienced clinician, books in the series offer a concise yet thorough way to master utilization of the continuously evolving supply of new and revised instruments as well as a convenient method for keeping up to date on the tried-and-true measures. The novice will find here a prioritized assembly of all the information and techniques that must be at one's fingertips to begin the complicated process of individual psychological diagnosis.

    Wherever feasible, visual shortcuts to highlight key points are utilized alongside systematic, step-by-step guidelines. Chapters are focused and succinct. Topics are targeted for an easy understanding of the essentials of administration, scoring, interpretation, and clinical application. Theory and research are continually woven into the fabric of each book, but always to enhance clinical inference, never to sidetrack or overwhelm. We have long been advocates of intelligent testing—the notion that a profile of test scores is meaningless unless it is brought to life by the clinical observations and astute detective work of knowledgeable examiners. Test profiles must be used to make a difference in the child's or adult's life, or why bother to test? We want this series to help our readers become the best intelligent testers they can be.

    The most exciting new feature of the third edition of Essentials of Cross-Battery Assessment is the improved psychometric foundation upon which the approach is based, as summarized in Chapter 1. For example, cross-battery composites are based on relevant formulas instead of rules of thumb. Also, the software programs on the CD are superb. Each of the three programs from the second edition was expanded and revised extensively. The Cross-Battery Assessment Data Management and Interpretive Assistant (XBA DMIA v2.0) includes over 100 cognitive, achievement, and neuropsychological batteries and 750 subtests. It contains several new features that make program navigation simple and interpretation of test data within the context of CHC theory comprehensive and efficient.

    The SLD Assistant program from the second edition was substantially revised and expanded and was renamed Pattern of Strengths and Weaknesses Analyzer (PSW-A v1.0). This program has a number of features that aid practitioners in identifying and diagnosing specific learning disabilities (SLD). Rather than relying on a traditional discrepancy analysis, the PSW-A provides a sophisticated synthesis of cognitive strengths, cognitive deficits, and academic deficits. The methods used to analyze an individual's pattern of strengths and weaknesses for the purpose of SLD identification are grounded in CHC ability–achievement relations research and are psychometrically sound. The program is easy to use and will prove to be a valuable resource to practitioners.

    The third program on the CD is the Culture-Language Interpretive Matrix (C-LIM v2.0). This program evaluates data from standardized norm-referenced tests to determine the relative influence of English-language proficiency and level of acculturation on test performance. The C-LIM v2.0 provides a systematic method that facilitates evaluation of cultural and linguistic factors that may be present in the evaluation of individuals from diverse backgrounds. This version of the C-LIM has been revised to allow for the evaluation of culture and language on test performance separately, which expands the utility of the program to speech-language pathologists, for example. In addition, the program allows for an evaluation of culturally and linguistically diverse individuals who function in the high-average and gifted ranges of ability.

    This third edition of Essentials of Cross-Battery Assessment includes numerous appendices that extend beyond CHC theory. For example, Appendix G provides neuropsychological domain classifications of all subtests from pertinent cognitive and neuropsychological batteries. And this edition features multiple case reports written by well-respected, expert clinicians from across the country that demonstrate the utility of the authors’ interpretation methods and programs. Unlike previous editions of this book, the third edition thoroughly covers a much wider range of ability measures, including cognitive, academic, and neuropsychological batteries. Crafted by the international leaders in cross-battery assessment, this book is truly an essential resource for examiners from diverse clinical backgrounds.

    Alan S. Kaufman, PhD, and Nadeen L. Kaufman, EdD, Series Editors

    Yale Child Study Center, Yale University School of Medicine

    Acknowledgments

    We are deeply indebted to Agnieszka Dynda, who assisted with the programming of the PSW-A v1.0, the XBA DMIA v2.0, and C-LIM v2.0. Agnieszka also worked on, edited, and formatted just about all of the numerous tables, figures, rapid references, and appendices included in this book. Without her expertise, attention to detail, and unwavering assistance, patience, and support, including her much-appreciated hospitality and caretaking during our collective sleepovers, this book would not have made it to production. We are also deeply appreciative of our colleagues Gail Cheramie, Jim Hanson, John Garrutto, and Karen Apgar, who provided us with examples of their knowledge and expertise in the form of psychological reports. Gail, Jim, John, and Karen skillfully demonstrated the utility of the methods and programs espoused in this book. In addition, we thank our colleagues Marlene Sotelo-Dynega and Jennifer T. Mascolo, as well as our graduate assistants Tara Cuskley and Shauna Dixon, who prepared important appendices packed with valuable information about all 759 subtests included in our book—appendices that practitioners will undoubtedly find invaluable in the test interpretation process. We also thank Robert Misak for his continued support of and contribution to the ideas underlying the PSW-A v1.0 program, particularly the g-Value component of the program.

    We also extend a heartfelt and sincere thank-you to our colleagues and friends in Arizona, Christina Hanel and Larry (Laris) Pristo, for the countless hours they spent beta testing our software programs. They have jokingly made it clear that it is likely impossible for us to ever repay them for their efforts. We will certainly try! Finally, a number of our graduate students, especially Sabrina Ismailer, Alla Zhelinsky, and Sara Douglas, have devoted their time to this book, ordering and organizing tests, conducting literature reviews, devising Test Yourself questions, and ensuring that any and all information we needed was delivered accurately and in a timely fashion. Additionally, we extend a special thank you to those students and colleagues who assisted us at the last minute with various editorial tasks—Rachel Larrain, Michael Klein, and Jamie Ristaino.

    There are several other individuals who deserve special mention, particularly staff members at Wiley including Marquita Flemming, Sherry Wasserman, and Kim Nir. We are truly grateful for their unwavering support, attention to detail, and editorial expertise during the production of this book!

    And finally, on a personal note, a heartfelt thank you to Annie, for all the times she picked up Megan and spent a long Saturday or Sunday with her having fun and being kids, which allowed us to focus on the book, each time bringing us just a little bit closer to completion. Her willingness to give generously of herself and her time is so very much appreciated.

    Chapter One

    Overview


    The Cross-Battery Assessment approach (hereafter referred to as the XBA approach) was introduced by Flanagan and her colleagues over 15 years ago (Flanagan & McGrew, 1997; Flanagan, McGrew, & Ortiz, 2000; Flanagan & Ortiz, 2001; McGrew & Flanagan, 1998). The XBA approach is based on the Cattell-Horn-Carroll (CHC) theory (and now also integrated with neuropsychological theory). It provides practitioners with the means to make systematic, reliable, and theory-based interpretations of any ability battery and to augment that battery with cognitive, achievement, and neuropsychological subtests from other batteries to gain a more psychometrically defensible and complete understanding of an individual's pattern of strengths and weaknesses (Flanagan, Ortiz, & Alfonso, 2007). Moving beyond the boundaries of a single cognitive, achievement, or neuropsychological battery by adopting the rigorous theoretical and psychometric XBA principles and procedures represents a significant improvement over single-battery assessment because it allows practitioners to focus on accurate and valid measures of the cognitive constructs and neurodevelopmental functions that are most germane to referral concerns (e.g., Carroll, 1998; Decker, 2008; Kaufman, 2000; Wilson, 1992).

    Don't Forget

    The XBA approach provides practitioners with the means to make systematic, reliable, and theory-based interpretations of ability batteries and to augment them with cognitive, achievement, and neuropsychological tests from other batteries to gain a more defensible and complete understanding of an individual's pattern of strengths and weaknesses.

    According to Carroll (1997), the CHC taxonomy of human cognitive abilities "appears to prescribe that individuals should be assessed with respect to the total range of abilities the theory specifies" (p. 129). However, because Carroll recognized that any such prescription would of course create enormous problems, he indicated that "[r]esearch is needed to spell out how the assessor can select what abilities need to be tested in particular cases" (p. 129). Flanagan and colleagues’ XBA approach clearly spells out how practitioners can conduct assessments that approximate the total range of cognitive and academic abilities and neuropsychological processes more adequately than is possible with any collection of co-normed tests.

    In a review of the XBA approach, Carroll (1998) stated that it "can be used to develop the most appropriate information about an individual in a given testing situation" (p. xi). In Kaufman's (2000) review of XBA, he said that the approach is based on sound assessment principles, adds theory to psychometrics, and improves the quality of the assessment and interpretation of cognitive abilities and processes. More recently, Decker (2008) stated that the XBA approach "may improve school psychology assessment practice and facilitate the integration of neuropsychological methodology in school-based assessments [because it] shift[s] assessment practice from IQ composites to neurodevelopmental functions" (p. 804). Finally, a recent listserv thread of the National Association of School Psychologists focused on the potential weaknesses of the XBA approach. In that thread, Kevin McGrew (2011, March 30) stated, "In the hands of ‘intelligent’ intelligence examiners the XBA system is safe and sound."

    Noteworthy is the fact that assessment professionals crossed batteries long before Woodcock (1990) recognized the need and before Flanagan and her colleagues introduced the XBA approach. Neuropsychological assessment has long crossed various standardized tests in an attempt to measure a broader range of brain functions than that offered by any single instrument (Hale & Fiorello, 2004; Hale, Wycoff, & Fiorello, 2011; Lezak, 1976, 1995; Lezak, Howieson, & Loring, 2004; see Wilson, 1992, for a review). Nevertheless, several problems with crossing batteries plagued assessment-related fields for years. Most of these problems have been circumvented by Flanagan and colleagues’ XBA approach (see Table 1.1 for examples). But unlike the XBA approach, various other so-called cross-battery and flexible-battery techniques applied within the fields of school psychology and neuropsychology are not grounded in a systematic approach that is theoretically and psychometrically sound. Thus, as Wilson (1992) cogently pointed out, the field of neuropsychological assessment is in need of an approach to guide practitioners through the selection of measures that would result in more specific and delineated patterns of function and dysfunction—"an approach that provides more clinically useful information than one that is wedded to the utilization of subscale scores and IQs" (p. 382).

    Table 1.1 Parallel Needs in Cognitive Assessment–Related Fields Addressed by the XBA Approach

    Source: Information obtained, in part, from Wilson (1992).

    Indeed, all fields involved in the assessment of cognitive and neuropsychological functioning have some need for an approach that would aid practitioners in their attempt "to tap all of the major cognitive areas, with emphasis on those most suspect on the basis of history, observation, [current hypotheses] and on-going test findings" (Wilson, 1992, p. 382; see also Flanagan, Alfonso, Ortiz, & Dynda, 2010; Miller, in press). Ever since publication of the first edition of Essentials of Cross-Battery Assessment (Flanagan & Ortiz, 2001), the XBA approach has met this need, and it now provides practitioners with a framework based on more psychometrically and theoretically rigorous procedures than ever before. For those new to the approach, the definition of and rationale for XBA are presented next, followed by a description of the XBA method. Figure 1.1 provides an overview of the information presented in this chapter.

    Figure 1.1 Overview of the XBA Approach

    Note: CHC = Cattell-Horn-Carroll

    XBA DMIA = Cross-Battery Data Management and Interpretive Assistant v2.0. This program automates the XBA approach and is found on the CD accompanying this book.

    Definition

    The XBA approach is a method of assessing cognitive and academic abilities and neuropsychological processes that is grounded in CHC theory and research and neuropsychological theory and research (e.g., Miller, 2007, 2010, 2013). It allows practitioners to measure a wider range (or a more in-depth but selective range) of ability and processing constructs than that represented by any given stand-alone assessment battery, in a reliable and valid manner. The XBA approach is based on four foundational sources of information that together provide the knowledge base necessary to organize a theory-driven, comprehensive assessment of cognitive, academic, and neuropsychological constructs.

    Don't Forget

    The XBA approach allows practitioners to reliably measure a wider range (or a more in-depth but selective range) of abilities than that represented by any single assessment battery.

    Foundation of the XBA Approach

    The foundation of the XBA approach rests, in part, on CHC theory and the broad and narrow CHC ability classifications of all subtests that comprise current cognitive, achievement, and selected neuropsychological batteries (i.e., tests published after 2000). CHC theory is discussed first, followed by a summary of the broad and narrow CHC ability classifications of tests. The fourth foundational source of information underlying the XBA approach—relations among cognitive abilities, neuropsychological processes, and academic skills—is discussed in Chapter 2.

    CHC Theory

    Psychometric intelligence theories have converged in recent years on a more complete or expanded multiple intelligences taxonomy, reflecting syntheses of factor analytic research conducted over the past 60 to 70 years. The most recent representation of this taxonomy is the CHC structure of cognitive abilities. CHC theory is an integration of Cattell and Horn's Gf-Gc theory and Carroll's three-stratum theory of the structure of cognitive abilities.

    Original Gf-Gc Theory and the Cattell-Horn Expanded Gf-Gc Theory: First Precursors to CHC Theory

    The original conceptualization of intelligence developed by Cattell in the early 1940s was a dichotomous view of cognitive ability and was referred to as fluid-crystallized theory or Gf-Gc theory. Cattell based his theory on his own factor-analytic work as well as on that of Thurstone, conducted in the 1930s. Cattell believed that fluid intelligence (Gf) included inductive and deductive reasoning abilities that were influenced by biological and neurological factors as well as incidental learning through interaction with the environment. He postulated further that crystallized intelligence (Gc) consisted primarily of acquired knowledge abilities that reflected, to a large extent, the influences of acculturation (Cattell, 1957, 1971).

    In 1965, Cattell's student, John Horn, reanalyzed Cattell's data and expanded the dichotomous Gf-Gc model to include four additional abilities, namely visual perception or processing (Gv), short-term acquisition and retrieval (SAR; now coded Gsm), long-term storage and retrieval (or tertiary storage and retrieval [TSR]; now coded Glr), and speed of processing (Gs). Later, Horn also added auditory processing ability (Ga) to the theoretical model and refined the definitions of Gv, Gs, and Glr (Horn, 1967; Horn & Stankov, 1982). By the early 1990s, Horn had added a factor representing an individual's quickness in reacting (reaction time) and making decisions (decision speed). The decision speed factor was labeled Gt (Horn, 1991). Finally, factors for quantitative ability (Gq) and broad reading/writing ability (Grw) were added to the model, based on the research of Horn (e.g., 1991) and Woodcock (1994), respectively. As a result of the work of Horn and his colleagues, Gf-Gc theory expanded to a 10-factor model (see Figure 1.2) that became known as the Cattell-Horn Gf-Gc theory, or sometimes as contemporary or modern Gf-Gc theory (Horn, 1991; Horn & Blankson, 2005; Horn & Noll, 1997).

    Figure 1.2 Cattell-Horn-Carroll Theory of Cognitive Abilities That Guided Intelligence Test Construction in the First Decade of the New Millennium

    Note: This figure is based on information presented in McGrew (1997) and in Flanagan et al. (2000). Ovals represent broad abilities and rectangles represent narrow abilities. Overall g, general ability, is omitted from this figure intentionally, due to space limitations. Darker rectangles represent those narrow abilities that are most consistently represented on tests of cognitive and academic abilities. See Rapid Reference 1.1 (on page 17) for the definitions of the broad abilities that correspond to the codes in the ovals in this figure. See Appendix A for the definitions and examples of the narrow abilities that correspond to the codes in the rectangles.

    Carroll's Three-Stratum Theory: Second Precursor to CHC Theory

    In his seminal review of the world's literature on cognitive abilities, Carroll (1993) proposed that the structure of cognitive abilities could be understood best via three strata that differ in breadth and generality (see Figure 1.3). The broadest and most general level of ability is represented by stratum III. According to Carroll, stratum III represents a general factor consistent with Spearman's (1927) concept of g and subsumes both broad (stratum II) and narrow (stratum I) abilities. The various broad (stratum II) abilities are denoted with an uppercase G followed by a lowercase letter or letters, much as they had been written by Cattell and Horn (e.g., Gf and Gc). The eight broad abilities included in Carroll's theory subsume approximately 70 narrow (stratum I) abilities (Carroll, 1993; see also Carroll, 1997).

    Figure 1.3 Carroll's (1993) Three-Stratum Theory of Cognitive Abilities

    Note: Figure adapted with permission from D. P. Flanagan, K. S. McGrew, and S. O. Ortiz. Copyright 2000. The Wechsler Intelligence Scales and Gf-Gc theory: A contemporary approach to interpretation.

    Comparison of the Cattell-Horn and Carroll Theories

    Figure 1.4 provides a comparison of the Cattell-Horn Gf-Gc theory and Carroll's three-stratum theory (with only broad abilities shown). These theories are presented together in order to highlight the most salient similarities and differences between them. It is readily evident that the theories have much in common; each posits multiple broad (stratum II) abilities that, for the most part, have similar or identical names and abbreviations. But at least four major structural differences between the two models deserve mention.

    Figure 1.4 A Comparison of Cattell-Horn Gf-Gc Theory and Carroll's Three-Stratum Theory

    Note: Figure adapted with permission from D. P. Flanagan, K. S. McGrew, and S. O. Ortiz. Copyright 2000. The Wechsler Intelligence Scales and Gf-Gc theory: A contemporary approach to interpretation.

    1. Carroll's theory includes a general ability factor (stratum III) whereas the Cattell-Horn theory does not, as Horn and Carroll differed in their beliefs about the existence of this elusive construct (see Schneider & McGrew, 2012, for a more detailed discussion regarding g in this context).

    2. The Cattell-Horn theory includes quantitative reasoning as a distinct broad ability (i.e., Gq) whereas Carroll's theory includes quantitative reasoning as a narrow ability subsumed by Gf.

    3. The Cattell-Horn theory includes a distinct broad reading and writing (Grw) factor. Carroll's theory includes reading and writing as narrow abilities subsumed by Gc.

    4. Carroll's theory includes short-term memory with other memory abilities, such as associative memory, meaningful memory, and free-recall memory, under Gy whereas the Cattell-Horn theory separates short-term memory (Gsm) from associative memory, meaningful memory, and free-recall memory, because the latter abilities are purported to measure long-term retrieval (Glr in Figure 1.2). Notwithstanding these differences, Carroll (1993) concluded that the Cattell-Horn Gf-Gc theory represented the most comprehensive and reasonable approach to understanding the structure of cognitive abilities at that time.

    Decade of CHC Theory (2001–2011)

    In the late 1990s, McGrew (1997) attempted to resolve some of the differences between the Cattell-Horn and Carroll models. On the basis of his research, McGrew proposed an integrated Gf-Gc theory, and he and his colleagues used this model as a framework for interpreting the Wechsler Scales (Flanagan et al., 2000). This integrated theory became known as the CHC theory of cognitive abilities (using the initials of the authors in order of contribution: Cattell, Horn, then Carroll) shortly thereafter (see McGrew, 2005). The Woodcock-Johnson III Normative Update Tests of Cognitive Abilities (WJ III NU COG; Woodcock, McGrew, & Mather, 2001, 2007) was the first cognitive battery to be based on this theory. The components of CHC theory are depicted in Figure 1.2. This figure shows that CHC theory consists of 10 broad cognitive abilities and more than 70 narrow abilities.

    The CHC theory presented in Figure 1.2 omits a g or general ability factor, primarily because the utility of the theory (as it is employed in assessment-related disciplines) is in clarifying individual cognitive and academic strengths and weaknesses that are understood best through the operationalization of broad (stratum II) and narrow (stratum I) abilities (Flanagan et al., 2007). Others, however, continue to believe that g is the most important ability to assess because it predicts the lion's share of the variance in multiple outcomes, both academic and occupational (e.g., Canivez & Watkins, 2010; Glutting, Watkins, & Youngstrom, 2003). Regardless of one's position on the importance of g in understanding various outcomes (particularly academic), there is considerable evidence that both broad and narrow CHC cognitive abilities explain a significant portion of variance in specific academic abilities, over and above the variance accounted for by g (e.g., Floyd, McGrew, & Evans, 2008; McGrew, Flanagan, Keith, & Vanderwood, 1997; Vanderwood, McGrew, Flanagan, & Keith, 2002). The research on the relationship between cognitive abilities and academic skills (or the fourth foundational source of information underlying XBA) is presented in Chapter 2.

    Refinements and Extensions to CHC Theory

    Recently, Schneider and McGrew (2012) reviewed CHC-related research and provided a summary of the CHC abilities (broad and narrow) that currently have the most evidence to support them as viable constructs. In their attempt to provide an overarching CHC framework that incorporates the best-supported cognitive abilities, they articulated a 16-factor model containing over 80 narrow abilities (see Figure 1.5). Because CHC theory now represents a greater number of abilities than past CHC models did (e.g., Figure 1.2), the broad abilities in Figure 1.5 have been grouped conceptually into six categories to enhance comprehension, in a manner similar to that suggested by Schneider and McGrew (i.e., Reasoning, Acquired Knowledge, Memory and Efficiency, Sensory, Motor, and Speed and Efficiency). Space limitations preclude a discussion of all the ways in which CHC theory has evolved and the reasons why certain refinements and changes have been made (see Schneider & McGrew for a discussion). However, to assist the reader in transitioning from the 10-factor CHC model (Figure 1.2) to the 16-factor CHC model (Figure 1.5), the following brief explanations are offered.

    Figure 1.5 Current and Expanded Cattell-Horn-Carroll (CHC) Theory of Cognitive Abilities

    Note: This figure is based on information presented in Schneider and McGrew (2012). Ovals represent broad abilities and rectangles represent narrow abilities. Overall g, or general ability, is omitted from this figure intentionally due to space limitations. Darker rectangles represent those narrow abilities that are most consistently represented on tests of cognitive and academic abilities. Conceptual groupings of abilities were suggested by Schneider and McGrew. See Rapid Reference 1.1 for definitions of broad abilities and Appendix A for definitions of narrow abilities.

    Of the 10 CHC factors depicted in Figure 1.2, all were refined by Schneider and McGrew (2012) except Gq. Following is a brief list of the most salient revisions and refinements to CHC theory.

    1. With regard to Gf, Piagetian Reasoning (RP) and Reasoning Speed (RE) were deemphasized (and, therefore, are not included in Figure 1.5). The primary reason is that there is little evidence that they are distinct factors.

    2. Four narrow abilities—Foreign Language Proficiency (KL), Geography Achievement (A5), General Science Information (K1), and Information about Culture (K2)—were moved to a different CHC broad ability, called Domain-Specific Knowledge (Gkn; defined below). Also, within the area of Gc, Foreign Language Aptitude (LA) was dropped, as it is a combination of abilities designed for the purpose of predicting one's success in learning foreign languages and, as such, is not considered a distinct ability. The final refinement to Gc involved dropping the narrow ability of Oral Production and Fluency (OP) because it is difficult to distinguish it from the narrow ability of Communication Ability (CM).

    3. In the area of Grw, Verbal (Printed) Language Comprehension (V) was dropped because it appears to represent a number of different abilities (e.g., reading decoding, reading comprehension, reading speed) and, therefore, is not a distinct ability. Likewise, Cloze Ability (CZ) was dropped from Grw because it is not meaningfully distinct from reading comprehension. Rather, CZ appears to be an alternative method of measuring reading comprehension. As such, current reading comprehension tests that use the cloze format as well as those formally classified as CZ (e.g., WJ III NU ACH Passage Comprehension) are classified as Reading Comprehension (RC) here. The final refinement to Grw involved adding the narrow ability of Writing Speed (WS), as this ability appears to cut across more than one broad ability (see Schneider & McGrew, 2012).

    4. Several refinements were made to the broad memory abilities of Glr and Gsm. Learning Abilities (L1) was dropped from Glr and Gsm. It appears that Carroll conceived of L1 as a superordinate category consisting of different kinds of long-term learning abilities. Schneider and McGrew (2012) referred to this category (i.e., L1) as "Glr-Learning Efficiency," which includes the narrow abilities of Free Recall Memory (M6), Associative Memory (MA), and Meaningful Memory (MM). The remaining Glr narrow abilities are referred to as Retrieval Fluency abilities (see Figure 1.5). In the area of Gsm, the name of the Working Memory (MW) narrow ability was changed to Working Memory Capacity (also MW), as Schneider and McGrew believed the latter term is more descriptive of the types of tasks that are used most frequently to measure MW (e.g., Wechsler Letter-Number Sequencing).

    5. In the area of Gv, one change was made: the narrow ability name Spatial Relations (SR) was changed to Speeded Rotation (also SR) to more accurately describe this ability. Speeded Rotation is the ability to solve problems quickly using mental rotation of simple images (Schneider & McGrew, 2012, p. 129). This ability is similar to visualization because it involves rotating mental images, but it is distinct because it has more to do with the speed at which mental rotation tasks can be completed (Lohman, 1996; Schneider & McGrew, 2012). Also, Speeded Rotation tasks typically involve fairly simple images. It is likely that the majority of tests that were classified as Spatial Relations in the past should have been classified as measures of Vz (Visualization) only (rather than SR, Vz). All tests that were classified as SR (Spatial Relations) were reevaluated according to their task demands and, when appropriate, were reclassified as Vz in this edition. No tests were reclassified as SR (Speeded Rotation).

    6. In the area of Ga, Temporal Tracking (UK) tasks are thought to measure Attentional Control within working memory. As such, UK was dropped as a narrow ability subsumed by Ga. In addition, six Ga narrow abilities—General Sound Discrimination (U3), Sound-Intensity/Duration Discrimination (U6), Sound-Frequency Discrimination (U5), and the Hearing and Speech Threshold factors (UA, UT, UU)—were considered to represent sensory acuity factors, which fall outside the scope of CHC theory, and were therefore dropped (Schneider & McGrew, 2012).

    7. In the area of Gs, Reading Speed (RS) and Writing Speed (WS) were added. Although tasks that measure these abilities clearly fall under the broad ability of Grw, they demand quick, accurate performance and are, therefore, also measures of Gs. The narrow Gs ability of Semantic Processing Speed (R4) was moved to Gt. Tests previously classified as R4 were reclassified as Perceptual Speed (P; a narrow Gs ability) in this edition. Also, the narrow ability of Inspection Time (IT) was added to the broad ability of Gt (see Schneider & McGrew, 2012, for details).

    In addition to the within-factor refinements and changes just mentioned, the CHC model has been expanded to include six additional broad abilities: General (Domain-Specific) Knowledge (Gkn), Olfactory Abilities (Go), Tactile Abilities (Gh), Psychomotor Abilities (Gp), Kinesthetic Abilities (Gk), and Psychomotor Speed (Gps) (McGrew, 2005; Schneider & McGrew, 2012). Notably, the major intelligence tests do not measure most (or any) of these additional factors directly, likely because these abilities (with the possible exception of Gkn) do not contribute much to the prediction of achievement, which is a major purpose of intelligence and cognitive ability tests. However, some of these factors are typically assessed by neuropsychological instruments, because such tests are intended, in part, to clarify the sensory and motor manifestations of typical and atypical fine- and gross-motor development, traumatic brain injury, and other neurologically based disorders. For example, several tests of the Dean-Woodcock Neuropsychological Battery (Dean & Woodcock, 2003) appear to measure Gh (e.g., Tactile Examination: Finger Identification; Tactile Examination: Object Identification; Tactile Examination: Palm Writing; Tactile Identification: Simultaneous Localization) (Flanagan et al., 2010; see Appendix G for the neuropsychological domain classifications of several ability tests included in this book). Also noteworthy is that no commonly used comprehensive intelligence or neuropsychological batteries measure Go, Gt, or Gps. Rapid Reference 1.1 includes definitions of all CHC broad abilities included in Figure 1.5; Appendix A includes definitions of and task examples for all CHC narrow abilities included in Figure 1.5.

    Caution

    The major intelligence batteries do not directly measure the recently added factors, likely because these abilities (with the possible exception of Gkn) do not contribute much to the prediction of academic achievement.

    Don't Forget

    The CHC model has been expanded to include six additional broad abilities: General (Domain-Specific) Knowledge (Gkn), Olfactory Abilities (Go), Tactile Abilities (Gh), Psychomotor Abilities (Gp), Kinesthetic Abilities (Gk), and Psychomotor Speed (Gps).

    Rapid Reference 1.1

    Definitions of 16 Broad CHC Abilities

    In sum, despite the number of refinements, changes, and extensions that have been made to CHC theory recently, approximately 9 broad cognitive abilities and 35–40 narrow abilities are measured consistently by popular cognitive, achievement, and neuropsychological tests. These commonly measured abilities are shaded gray in Figures 1.2 and 1.5. All tests in this edition of Essentials of Cross-Battery Assessment were classified according to the latest iteration of CHC theory (Figure 1.5). The purpose of classifying tests according to the broad and narrow CHC abilities they measure is discussed next.

    Don't Forget

    Approximately 9 broad cognitive abilities and 35–40 narrow abilities are measured consistently by popular cognitive, achievement, and neuropsychological tests.

    CHC Broad (Stratum II) Classifications of Cognitive, Academic, and Neuropsychological Ability Tests

    Based on the results of a series of cross-battery confirmatory factor analysis studies of the major intelligence batteries (see Keith & Reynolds, 2010, 2012; Reynolds, Keith, Flanagan, & Alfonso, 2012) and task analyses performed by a variety of cognitive test experts, Flanagan and colleagues classified all the subtests of the major cognitive and achievement batteries as well as select neuropsychological batteries according to the particular CHC broad abilities they measured (e.g., Flanagan et al., 2010; Flanagan, Ortiz, Alfonso, & Mascolo, 2002, 2006; Flanagan et al., 2007; McGrew, 1997; McGrew & Flanagan, 1998; Reynolds et al., 2012). To date, more than 100 batteries and nearly 800 subtests have been classified according to the CHC broad and narrow abilities they are believed to measure, based in part on the results of these studies and analyses. The CHC classifications of cognitive, achievement, and neuropsychological batteries assist practitioners in identifying measures that assess the various broad and narrow abilities represented in CHC theory.

    Classification of tests at the broad ability level is necessary to improve the validity of cognitive assessment and interpretation. Specifically, broad ability classifications help ensure that the CHC constructs underlying assessments are relatively "pure" and minimally affected by construct-irrelevant variance (Messick, 1989, 1995). In other words, knowing which tests measure which abilities enables clinicians to organize tests into construct-relevant clusters—clusters that contain only measures relevant to the construct or ability of interest (McGrew & Flanagan, 1998).

    To clarify, construct-irrelevant variance is present when an assessment is too broad, "containing excess reliable variance associated with other distinct constructs . . . that affects responses in a manner irrelevant to the interpreted constructs" (Messick, 1995, p. 742). For example, the Wechsler Intelligence Scale for Children–Fourth Edition (WISC-IV; Wechsler, 2003) Perceptual Reasoning Index (PRI) has construct-irrelevant variance because, in addition to its two indicators of Gf (i.e., Picture Concepts, Matrix Reasoning), it has one indicator of Gv (i.e., Block Design). Therefore, the PRI is a mixed measure of two relatively distinct broad CHC abilities (Gf and Gv); it contains reliable variance (associated with Gv) that is irrelevant to the interpreted construct of Gf. Through CHC-driven confirmatory factor analysis (CFA), Keith, Fine, Taub, Reynolds, and Kranzler (2006) showed that a five-factor model that included Gf and Gv (not PRI) fit the WISC-IV standardization data very well. As a result, Flanagan and Kaufman (2004, 2009) provided Gf and Gv composites for the WISC-IV, and Flanagan and her colleagues use them in the XBA approach because they contain primarily construct-relevant variance. The ongoing cross-battery CFAs conducted by Keith and colleagues will continue to lead to improvements in how cognitive subtests are classified, in general, and organized within the context of XBA, in particular (e.g., Reynolds et al., 2012).

    Caution

    Construct-irrelevant variance is present when a composite assesses two or more distinct constructs (e.g., the Perceptual Reasoning Index on the WISC-IV measures both Gf, via Picture Concepts and Matrix Reasoning, and Gv, via Block Design). Construct-irrelevant variance can occur at the subtest and composite levels, leading to psychologically ambiguous scores that confound interpretation.

    Construct-irrelevant variance can also operate at the subtest (as opposed to composite) level. For example, a Verbal Analogies test (e.g., "Sun is to day as moon is to ___") measures both Gc and Gf. That is, in theory-driven factor-analytic studies, Verbal Analogies tests have significant loadings on both the Gc and Gf factors (e.g., Woodcock, 1990). Therefore, these tests are considered factorially complex—a condition that complicates interpretation (e.g., Is poor performance due to low vocabulary knowledge [Gc], to poor reasoning ability [Gf], or to both?).

    According to Guilford (1954), "Any test that measures more than one common factor to a substantial degree [e.g., Verbal Analogies] yields scores that are psychologically ambiguous and very difficult to interpret" (p. 356; cited in Briggs & Cheek, 1986). Therefore, cross-battery assessments typically are designed using only empirically strong or moderate (but not factorially complex or mixed) measures of CHC abilities (Flanagan et al., 2007; McGrew & Flanagan, 1998).
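    The effect of construct-irrelevant variance on a mixed composite can be illustrated with a small simulation. The sketch below is not drawn from the book or from WISC-IV data; it simply generates two correlated latent abilities (standing in for Gf and Gv), builds three hypothetical subtest scores, and compares a PRI-like mixed composite against a pure two-subtest Gf composite. The loadings and latent correlation are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # simulated examinees

# Two moderately correlated latent abilities, standing in for Gf and Gv.
r_latent = 0.5
cov = np.array([[1.0, r_latent], [r_latent, 1.0]])
gf, gv = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

def subtest(latent, loading=0.8):
    """Observed score = loading * latent ability + unique/error variance."""
    noise = rng.standard_normal(len(latent)) * np.sqrt(1 - loading**2)
    return loading * latent + noise

# Hypothetical indicators: two Gf subtests and one Gv subtest.
matrix_reasoning = subtest(gf)
picture_concepts = subtest(gf)
block_design = subtest(gv)

# PRI-like mixed composite (two Gf indicators plus one Gv indicator)
# versus a construct-relevant composite built from Gf indicators only.
pri = matrix_reasoning + picture_concepts + block_design
gf_composite = matrix_reasoning + picture_concepts

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"PRI-like composite:  r(Gf)={corr(pri, gf):.2f}  r(Gv)={corr(pri, gv):.2f}")
print(f"Pure Gf composite:   r(Gf)={corr(gf_composite, gf):.2f}  r(Gv)={corr(gf_composite, gv):.2f}")
```

Under these assumptions, the mixed composite carries substantial reliable variance from the latent Gv factor, while the two-subtest Gf composite tracks the latent Gf factor much more cleanly, which is the sense in which construct-relevant clusters yield less ambiguous scores.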

    CHC Narrow (Stratum I) Classifications of Cognitive, Academic, and Neuropsychological Ability Tests

    Narrow ability classifications were originally reported in McGrew (1997) and then, following minor modifications, in McGrew and Flanagan (1998) and Flanagan et al. (2000). Flanagan and her colleagues continued to gather content validity data on cognitive ability tests and expanded their analyses to include tests of academic achievement (Flanagan et al., 2002, 2006) and, more recently, tests of neuropsychological processes (e.g., Flanagan, Alfonso, Mascolo, & Hale, 2011; Flanagan et al., 2010). For this edition of the book, the three authors and one of their colleagues, Dr. Agnieszka M. Dynda, classified hundreds of subtests according to the broad and narrow CHC abilities they measure. Inter-rater reliability estimates were calculated, disagreements were reviewed by all four raters, and inconsistencies were ultimately resolved. The classification process, along with the results of the inter-rater reliability analyses, is provided in Appendix L.

    Classifications of cognitive ability tests according to content, format, and task demand at the narrow
