

CHILD NEUROPSYCHOLOGY, 2016
http://dx.doi.org/10.1080/09297049.2016.1259402

Wisconsin Card Sorting Test embedded validity indicators developed for adults can be extended to children
Jonathan D. Lichtenstein (a), Laszlo A. Erdodi (b), Jaspreet K. Rai (b), Anya Mazur-Mosiewicz (c,d) and Lloyd Flaro (e)

(a) Department of Psychiatry, Neuropsychology Services, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA; (b) Department of Psychology, Neuropsychology Track, University of Windsor, ON, USA; (c) Department of Psychology, Chicago School of Professional Psychology, IL, USA; (d) Department of Psychiatry and Behavioral Science, Oklahoma State University, Tulsa, OK, USA; (e) Private practice, Edmonton, AB, USA

ABSTRACT
Past studies have examined the ability of the Wisconsin Card Sorting Test (WCST) to discriminate valid from invalid performance in adults using both individual embedded validity indicators (EVIs) and multivariate approaches. This study is designed to investigate whether the two most stable of these indicators (failures to maintain set [FMS] and the logistic regression equation S-BLRE) can be extended to pediatric populations.
The classification accuracy for FMS and S-BLRE was examined in a mixed clinical sample of 226 children aged 7 to 17 years (64.6% male, mean age = 13.6 years) against a combination of established performance validity tests (PVTs).
The results show that at adult cutoffs, FMS and S-BLRE produce an unacceptably high failure rate (33.2% and 45.6%) and low specificity (.55–.72), but an upward adjustment in cutoffs significantly improves classification accuracy. Defining Pass as <2 and Fail as ≥4 on FMS results in consistently good specificity (.89–.92) but low and variable sensitivity (.00–.33). Similarly, cutting the S-BLRE distribution at 3.68 produces good specificity (.90–.92) but variable sensitivity (.06–.38). Passing or failing FMS or S-BLRE is unrelated to age, gender and IQ.
The data from this study suggest that in a pediatric sample, adjusted cutoffs on the FMS and S-BLRE ensure good specificity, but with low or variable sensitivity. Thus, they should not be used in isolation to determine the credibility of a response set. At the same time, they can make valuable contributions to pediatric neuropsychology by providing empirically supported, expedient and cost-effective indicators to enhance performance validity assessment.

ARTICLE HISTORY
Received 24 June 2016; Accepted 6 November 2016; Published online 28 November 2016

KEYWORDS
Performance validity testing; embedded validity indicators; WCST; effort testing; pediatric neuropsychological assessment; pediatric PVTs

In the context of neuropsychological evaluation, establishing the validity of testing data is a critical issue. In order to render accurate diagnoses and appropriate treatment recommendations, clinicians must first be confident that the subject's performance was indeed valid. The neuropsychology research literature over the last two decades has reflected this concern, as evidenced by the rapid expansion of studies investigating tools for assessing performance validity, including stand-alone performance validity tests (PVTs) and embedded validity indicators (EVIs).
Although much of the performance validity literature has focused on adult populations, the utility and efficacy of stand-alone PVTs have also been investigated in
pediatric samples, including research involving the Test of Memory Malingering
(TOMM, Tombaugh, 1996; e.g., DeRight & Carone, 2015; Ploetz, Mazur-Mosiewicz,
Kirkwood, Sherman, & Brooks, 2016), the Word Memory Test (WMT; Green, 2003;
e.g., Green & Flaro, 2003; Gunn, Batchelor, & Jones, 2010), the Medical Symptom
Validity Test (MSVT; Green, 2004; e.g., Carone, 2008; Kirkwood & Kirk, 2010), the
Nonverbal Medical Symptom Validity Test (NV-MSVT; Green, 2008; e.g., Harrison,
Flaro, & Armstrong, 2015), the 21-Item Test (Iverson, 1998; e.g., Martin, Haut,
Stainbrook, & Franzen, 1995), and the Victoria Symptom Validity Test (VSVT; Slick,
Hopp, Strauss, & Thompson, 1997; e.g. Brooks, 2012), to name a few.
EVIs hold a considerable advantage over stand-alone PVTs. First, by utilizing data
that are already collected for clinical purposes, EVIs are more cost effective, expedient,
and resistant to coaching than stand-alone tests (Miele, Gunner, Lynch, & McCaffrey,
2012). Second, EVIs allow for the continuous monitoring of cognitive effort in neuropsychological evaluation, a practice that is consistent with the highest forensic
standards (Boone, 2009; Bush et al., 2005; Heilbronner et al., 2009). Third, because
EVIs were originally conceived as dynamic models (Babikian, Boone, Lu, & Arnold,
2006; Boone, 2013; Larrabee, 2008; Nelson, Sweet, Berry, Bryant, & Granacher, 2007), they may be better able than stand-alone PVTs to respond to the unique
challenges of pediatric neuropsychology and provide the flexibility to accommodate
population-specific idiosyncrasies, such as developmental trajectories, higher volatility
of the target constructs, and increased reactivity to variables that are extraneous to the
assessment process. Although the signal detection performance of individual EVIs is
typically inferior to that of stand-alone PVTs, there is a growing body of evidence
suggesting that the aggregation of multiple EVIs into a single composite produces
results similar to those of stand-alone PVTs (Erdodi et al., 2016; Erdodi, Roth,
Kirsch, Lajiness-O’Neill, & Medoff, 2014; Larrabee, 2007).
While research on EVIs in the pediatric literature has lagged behind its adult counterpart, a number of these indicators have been studied. Much of the work on EVIs in children has stemmed from measures identified in adult populations, such as Reliable Digit Span (Blaskewitz, Merten, & Kathmann, 2008; Harrison & Armstrong, 2014; Kirkwood, Hargrave, & Kirk, 2011; Welsh, Bender, Whitman, Vasserman, & MacAllister, 2012), for which cutoffs were adjusted to achieve acceptable sensitivity and specificity as validity indicators. More recently, the California Verbal Learning Test – Children's Version (CVLT-C) recognition discriminability z-score has become a focus of pediatric EVI investigation (Baker, Connery, Kirk, & Kirkwood, 2014; Brooks & Ploetz, 2015), and newly developed pediatric neuropsychological tests are beginning to include built-in EVIs (Sherman & Brooks, 2015). Despite the growing research activity in this area, the findings have been quite variable, and pediatric EVIs continue to be an area in need of further study.
Originally developed by Grant and Berg (1948), the Wisconsin Card Sorting Test
(WCST; Heaton, Chelune, Talley, Kay, & Curtis, 1993) is a well-validated measure of
executive functions for both adults and children. Furthermore, it is also one of the most widely used measures in all of clinical neuropsychology (Rabin, Paolillo, & Barr, 2016;
Retzlaff, Butler, & Vanderploeg, 1992). The WCST generates several scores, including
categories completed, errors, perseverative errors, perseverative responses, conceptual-
level responses, and failures to maintain set (FMS). Surprisingly, FMS is rarely reported
in studies involving the WCST and children (Romine et al., 2004). This may be due to
the fact that FMS responses are challenging to explain clinically, as they may stem from
difficulty with skill areas ranging from attention and working memory to response
inhibition and self-monitoring capabilities. With such a variety of possible underlying
causes, the clinical meaning of FMS is not well understood.
In the adult literature, certain WCST variables—alone or in combination—have
demonstrated a potential to discriminate between valid and invalid response sets. The
empirical foundation of such investigations is the early observation that FMS are
relatively rare—not only in healthy controls but also in individuals with brain injuries
(Heaton et al., 1993). This assumption was reaffirmed by the results of a more recent
study that found FMS to be the least sensitive to executive deficits of the nine WCST
variables examined in a sample of 44 patients in the acute stage of recovery following a
unilateral stroke (Jodzio & Biechowska, 2010).
Building on this knowledge, Suhr and Boyer (1999) found a strong negative relationship between performance validity and FMS. They examined a sample of experimental malingerers (mean FMS = 2.3), normally responding undergraduates (mean FMS = 0.3), credible traumatic brain injury (TBI) patients (mean FMS = 0.2) and a TBI sample with probable malingered neurocognitive dysfunction (MND; Slick, Sherman, & Iverson, 1999; mean FMS = 1.7). Similarly, King, Sweet, Sherer, Curtiss, and Vanderploeg (2002) reported a roughly twofold increase in mean FMS (2.3) in adults with TBI who were seeking compensation and gave insufficient effort, compared to samples of patients with documented moderate/severe TBI who passed PVTs (mean FMS = 1.1) or who were assessed during acute inpatient rehabilitation (mean FMS = 0.9).
Larrabee (2003) was the first to turn these group-level differences into an explicit
validity indicator. He demonstrated that an FMS ≥2 produced a good combination of
sensitivity (.48) and specificity (.87) in differentiating 31 credible patients with moderate/severe TBI from 26 subjects who met the MND criteria. When used in combination
with other PVTs, this cutoff consistently produced perfect specificity (1.00) when
comparing a sample with probable MND to patients with mixed neurological etiology
and those with psychiatric disorders.
However, a subsequent investigation by Greve, Heinly, Bianchini, and Love (2009)
based on 373 TBI and 766 general clinical patients designed to replicate these findings
produced mixed results. Although credible patients with mild TBI produced lower
mean FMS (0.8) than those with MND (1.5), credible patients with severe TBI also
produced elevated mean FMS (1.3). The ≥2 cutoff resulted in a .42 sensitivity and .82
specificity in the detection of MND within the mild TBI group, while misclassifying
27% of the patients with severe TBI and 28% with psychiatric disorders.
Suhr and Boyer (1999) also developed a logistic regression equation (S-BLRE) that used the number of categories completed and FMS as predictors, with the probability of invalid responding as the outcome variable.
The new indicator produced excellent sensitivity (.82) and good specificity (.93) in the
patient sample. The authors attributed this remarkable classification accuracy to their selection of WCST variables that are minimally correlated, thus contributing unique
information to the predictive power of the overall model.
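As a schematic illustration of the scoring logic only (a minimal sketch; the intercept and weights below are placeholders, not the published Suhr and Boyer coefficients), the snippet shows how two minimally correlated WCST variables can be combined into a linear decision score whose logistic transform is read as the nominal probability of invalid responding. A score of 0.00 maps to the .50 probability that anchored the original >0.00 cutoff.

```python
import math

# Hypothetical illustration of a two-predictor logistic validity indicator.
# INTERCEPT, W_CATEGORIES, and W_FMS are placeholders, NOT the published
# Suhr & Boyer (1999) coefficients.
INTERCEPT = -1.0
W_CATEGORIES = -0.5   # more categories completed -> lower invalidity score
W_FMS = 1.2           # more failures to maintain set -> higher score

def decision_score(categories_completed: int, fms: int) -> float:
    """Linear decision score built from two minimally correlated WCST variables."""
    return INTERCEPT + W_CATEGORIES * categories_completed + W_FMS * fms

def probability_invalid(score: float) -> float:
    """Logistic transform of the decision score."""
    return 1.0 / (1.0 + math.exp(-score))

# A score of 0.00 corresponds to a .50 nominal probability of invalid
# responding, which is why >0.00 served as the original cutoff.
print(probability_invalid(0.0))  # 0.5
```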
As is often the case with simulation studies (Larrabee, 2007), the S-BLRE fell short of
expectations when applied to real-life patients. Greve and Bianchini (2002) reported
variable (.53–1.00) specificity values in a wide range of clinical samples. Even after
adjusting the cutoff from >0.00 (.50 nominal probability of invalid performance) to
≥3.16 (.90 nominal probability of invalid performance), the S-BLRE produced specificity
values between .75 and .88 (King et al., 2002). A comparable upward adjustment of
cutoffs stabilized the classification accuracy in the study by Greve, Bianchini, Mathias,
Houston, and Crouch (2002): S-BLRE >1.90 produced .19–.47 sensitivity at .89 specificity. These results were replicated in larger-scale investigations: S-BLRE ≥3.0 was associated with .32 sensitivity at .89 specificity (Greve et al., 2009).
Fewer data are available on life-span effects on the classification accuracy of WCST-based EVIs. Gligorović and Buha (2013) reported a mean FMS of 1.6 (range: 0–4) in a
sample of 95 children between 10 and 14 years of age with mild intellectual disability
(MFSIQ = 60.4, range: 50–70). At the opposite end of the age spectrum, Ashendorf,
O’Bryant, and McCaffrey (2003) found that an S-BLRE ≥3.16 identified 20% of 197
community-dwelling healthy older adults (MAge = 64.6, range: 55–75) as invalid, even
though they all passed a stand-alone PVT.
Despite several notes of caution, the overall empirical evidence suggests that WCST-
based EVIs have the potential to function as PVTs. However, little is known about the
relationship between age-related changes and the classification accuracy of WCST-
based EVIs. To date, there are no published studies examining FMS as a validity
indicator in children and no data on S-BLRE in pediatric samples at all. The present
study is designed to provide the first systematic investigation of these WCST indicators
in the context of pediatric performance validity assessment.
It was hypothesized that the original adult cutoffs on FMS and S-BLRE would
result in increased false positive rates in children. However, it was also predicted that
with the proper adjustment, it would be possible to optimize cutoffs for pediatric
populations. The final hypothesis contrasted the two EVIs themselves: it was anticipated that S-BLRE would outperform FMS, given both the instability of FMS as an indicator of cognitive functioning reported in the literature and anecdotal clinical evidence that FMS errors may be more common in children (with and without neurological disorders).

Method
Participants
Archival data were collected from a consecutive series of 226 children and adolescents who met a specific set of inclusion criteria and were clinically referred for comprehensive neuropsychological assessment to a pediatric neuropsychologist in private practice.
Referral sources included physicians, social services, and school boards. To be included
in the study, children had to meet the following criteria: age <18 years, FSIQ >70,
reading level of grade 3 or higher, data on a complete WCST administration and at least
two reference PVTs.
CHILD NEUROPSYCHOLOGY 5

The majority of the sample is Caucasian (86.7%) and male (64.6%), with a mean age
of 13.6 years (SD = 2.6, range: 7–17). Mean FSIQ was 89.8 (SD = 11.7, range: 70–126).
The participants fell into a wide range of diagnostic categories, including personality
disorder (13.6%), attention deficit hyperactivity disorder (ADHD, 11.2%), fetal alcohol
spectrum disorder (10.7%), conduct disorder (9.3%), learning disability (7.5%), autism
spectrum disorder (ASD, 7.5%), TBI (4.2%), language impairment (3.3%), post-trau-
matic stress disorder (PTSD, 2.8%), bipolar disorder (2.8%), oppositional defiant dis-
order (ODD, 2.3%), schizophrenia (2.3%), reactive attachment disorder (1.9%),
nonverbal learning disorder (1.9%), and developmental coordination disorder (1.9%).
Half of the participants (53.5%) had at least one comorbid condition, and only 8
participants (3.7%) had no diagnosis.

Materials
All participants completed a comprehensive battery of neuropsychological tests that included commonly used measures of general intelligence, memory, language, visuospatial abilities, attention, executive function, and motor skills. Performance validity was
assessed using a combination of stand-alone (WMT, MSVT, NV-MSVT) and
embedded (the Omissions [OMI] and Perseverations [PER] scales of the Conners’
Continuous Performance Test – Second Edition [CPT-II]) PVTs. Finally, these stand-
alone and embedded PVTs were combined into a validity composite (EI-5), which in
turn was used as an additional criterion PVT.

The Erdodi Index – Five Variable Model (EI-5)


There is a growing consensus in the PVT literature in favor of utilizing multiple indepen-
dent validity indicators administered throughout the assessment (Boone, 2009; Bush et al.,
2005; Heilbronner et al., 2009). The ongoing monitoring of test-taking effort has several
advantages. First, it reveals a chronological pattern of an individual’s cognitive effort as the
testing process unfolds (Boone, 2009; Larrabee, 2003). Second, it mitigates potential
limitations of relying on a single PVT (e.g., Batt, Shores, & Chekaluk, 2008; Greve, Ord,
Curtis, Bianchini, & Brennan, 2008). Finally, a multivariate model of validity assessment
based on independent PVTs provides non-redundant information on the credibility of the
neurocognitive profile (Nelson et al., 2007).
Although PVTs typically produce a dichotomous classification (Pass/Fail), test-tak-
ing effort is better conceptualized as a dimensional construct ranging from markedly
poor to full effort. In an attempt to capture this continuum, a composite measure (EI-5)
was developed by aggregating five empirically-supported PVTs: the WMT (Green,
2003), the MSVT (Green, 2004), the NV-MSVT (Green, 2008), CPT-II OMI, and CPT-II PER (Erdodi, Lichtenstein, Rai, & Flaro, 2016). Within each component of the
EI-5, performance reflecting a clear Pass received a score of 0, while performance on the
failing side of the most conservative cutoff was assigned a score of 3, with intermediate
levels of failure in between. In other words, the constituent PVTs were rescaled to
capture the validity gradient underlying performance on any cognitive test (Erdodi
et al., 2014; Erdodi, Tyson, et al., 2016). Table 1 provides a key that illustrates how EI-5
scores were assigned for each PVT as well as the associated base rates of failure (BRFail).
6 J. D. LICHTENSTEIN ET AL.

Table 1. The Components of the Erdodi Index (EI-5), Base Rates for Failing Each Cutoff, and Cumulative Failure Rates.

                              EI-5 Value
Components of the EI-5    0       1           2           3
WMT Failures*             0       1           2           3
  Base Rate               93.2    1.6         2.6         2.6
MSVT Failures**           0       1           2           3
  Base Rate               96.1    1.0         1.5         1.5
NV-MSVT Failures***       0       1           2           -
  Base Rate               92.3    2.1         5.7         0.0
CPT-II OMI                <75     75.0–79.9   80.0–99.9   ≥100
  Base Rate               94.3    1.0         3.8         1.0
CPT-II PER                <75     75.0–79.9   80.0–99.9   ≥100
  Base Rate               94.7    1.4         2.9         1.0
Note. *Immediate Recall, Delayed Recall, Consistency ≤ 82.5%; **Immediate Recall, Delayed Recall, Consistency ≤ 85%;
***(IR+DR+CNS+DRA+DRV+PA)/6 ≤ 90% and (DR+CNS+DRA+DRV)/4 > 88%. CPT-II = Conners’ Continuous
Performance Test – Second Edition;
MSVT = Medical Symptom Validity Test; NV-MSVT = Nonverbal MSVT; OMI = CPT-II Omissions t-score; PER = CPT-II
Perseverations t-score; WMT = Green’s Word Memory Test.

Table 2. Frequency Distribution and Descriptive Labels for EI-5A and EI-5B.

               EI-5A                       EI-5B
EI-5    f     %      Cumulative %    f     %      Cumulative %    Classification
0       110   74.8   74.8            189   83.6   83.6            PASS
1       5     3.4    78.2            5     2.2    85.8            Pass
2       16    10.9   89.1            16    7.1    92.9            Borderline
3       8     5.4    94.6            8     3.5    96.5            Fail
4       5     3.4    98.0            5     2.2    98.7            FAIL
5       3     2.0    100.0           3     1.3    100.0           FAIL
Note. EI-5A = Erdodi Index – Five Variable Model, Version A; EI-5B = Erdodi Index – Five Variable Model, Version B.
Participants with missing data on one or more of the EI-5 components were excluded for the EI-5A. Missing data on
the EI-5 components were assigned a score of 0 (i.e., Pass) for the EI-5B.

The value of the EI-5 was obtained by summing the rescaled score on each of the five
component PVTs. For the purpose of this study, two versions of the EI-5 were
computed: EI-5A and EI-5B (Table 2). Participants who were missing a score on any
of the five components were excluded from the EI-5A, effectively reducing the sample
size. In the case of the EI-5B, however, missing scores on one or more EI-5 components were automatically assigned a score of 0 (i.e., Pass).
A value of 0 on the EI-5 is considered an unequivocal PASS as it indicates that the
participant passed all five PVTs at the most liberal cutoff, and as such, consistently
demonstrated good effort. An EI-5 value of 1 is also considered a Pass because it reflects
failure on only one PVT at the most liberal cutoff, which is relatively common even in
otherwise credible subjects (Boone, 2013). An EI-5 value of 2 may indicate either failure
on two PVTs at the most liberal cutoff or failure on one PVT at a more conservative
cutoff. Thus, this is classified as Borderline.
EI-5 scores ≥3 represent failure on at least three of the five PVTs at the most liberal
cutoff, or failure on one or more PVTs at more conservative cutoffs. Thus, this range of
performance is considered a Fail, and is used as the demarcation line for invalid
responding. To maximize the purity of the reference groups, participants with a Borderline performance on the EI-5 were excluded from the analyses, following methodological guidelines for calibrating new PVTs (Axelrod, Meyers, & Davis, 2014; Greve & Bianchini, 2004).
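As a concrete sketch of the aggregation logic described above (a minimal illustration with hypothetical helper names; the CPT-II thresholds follow Table 1 and the labels follow Table 2), the EI-5 can be computed by rescaling each component PVT to a 0–3 score, summing, and labeling the total:

```python
from typing import Optional

def rescale_cpt_t_score(t: Optional[float]) -> int:
    """Rescale a CPT-II OMI/PER t-score onto the 0-3 EI-5 metric (Table 1)."""
    if t is None:      # EI-5B convention: a missing component counts as a Pass
        return 0
    if t < 75.0:
        return 0
    if t < 80.0:
        return 1
    if t < 100.0:
        return 2
    return 3

def ei5_total_and_label(components):
    """Sum five rescaled PVT scores (each 0-3) and label the total (Table 2)."""
    total = sum(components)
    if total == 0:
        return total, "PASS"        # passed every PVT at the most liberal cutoff
    if total == 1:
        return total, "Pass"        # one failure at the most liberal cutoff
    if total == 2:
        return total, "Borderline"  # excluded when calibrating new PVTs
    return total, "Fail"            # demarcation line for invalid responding

# Example: one CPT-II PER t-score of 82 (rescaled to 2); all other PVTs passed.
print(ei5_total_and_label([0, 0, 0, 0, rescale_cpt_t_score(82.0)]))  # (2, 'Borderline')
```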

Procedure
At the time of each assessment, consent was obtained for each child’s demographic,
diagnostic, and neuropsychological test data to be used anonymously for future archival
research. Tests were administered by a staff psychometrist following standard instruc-
tions. All identifying information was removed prior to releasing the data for research
purposes. The study was approved by the institutional research ethics board.

Data Analysis
Descriptive statistics (frequency, percentage, and cumulative percentage; mean, SD,
skew and range) were computed for variables of interest. Point-biserial correlations
were calculated between dichotomized PVTs (Pass/Fail) and demographic variables.
Finally, sensitivity and specificity were calculated using standard formulas. Sensitivity (the true positive [TP] rate) is the ratio of TP to the sum of TP and false negatives (FN), and represents the probability that a measure will correctly detect an invalid profile. Specificity (the true negative [TN] rate) is the ratio of TN to the sum of TN and false positives (FP), and represents the probability that a measure will correctly detect a valid profile.
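For reference, both indices reduce to simple ratios over the 2 × 2 classification counts; a minimal sketch:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: probability of correctly flagging an invalid profile."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: probability of correctly passing a valid profile."""
    return tn / (tn + fp)
```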

Results
Of the established PVTs, the NV-MSVT classified the largest proportion of the sample as
invalid (7.7%), while the MSVT produced the smallest BRFail (3.9%). The EI-5A produced
the highest BRFail (12.2%) of the reference PVTs, while the EI-5B produced an intermediate
BRFail (7.6%). The mean EI-5A is 0.65 (SD = 1.26, median = 0, range: 0–5) and the mean EI-
5B is 0.42 (SD = 1.07, median = 0; range: 0–5). Both distributions have a strong positive
skew (1.88 for EI-5A and 2.57 for EI-5B) and kurtosis (2.57 for EI-5A and 6.14 for EI-5B).
The mean FMS is 1.25 (SD = 1.38, median = 1, range: 0–7). The distribution is
positively skewed (1.37) with a positive kurtosis (2.01), showing an inverted J-shape
with a rapidly decreasing frequency of extreme scores. The mean S-BLRE is 0.55
(SD = 1.95, median = −0.33, range: −1.34 to 7.23). The distribution is positively skewed
(1.13) with kurtosis within ±1.0. A visual inspection revealed an inverted J-shaped
distribution similar to that of the FMS. The most frequent scores are in the negative
(i.e., unequivocally valid) range, with fewer and fewer observed scores at the far right-hand side of the scale (Table 3).
At adult cutoffs (≥2), FMS has a BRFail (33.2%) nearly three times as high as the next
highest. Consequently, it produces an unacceptably low specificity (.67–.72) against all
reference PVTs. Raising the cutoff to ≥3 improves specificity (.80–.83), but still falls
short of the minimum threshold. Extending the passing range to FMS <3 results in
small gains in specificity (.83–.85), with variable sensitivity (.07–.38). Defining a Pass as
<2 and a Fail as ≥4 produces good specificity (.89–.92), but low and fluctuating sensitivity (.00–.33). Further increasing the cutoff to ≥5 produces significant gains in specificity (.95–.99), without a notable loss in sensitivity (Table 5).

Table 3. Frequency Distribution, Percentages, Cumulative Percentages, and Classification Labels for Failure to Maintain Set (FMS).
FMS f % Cumulative % Classification
0 83 36.7 36.7 PASS
1 68 30.1 66.8 Pass
2 39 17.3 84.1 Borderline
3 20 8.8 92.9 Borderline
4 8 3.5 96.5 Borderline
5 5 2.2 98.7 Fail
6 2 0.9 99.6 FAIL
7 1 0.4 100.0 FAIL

Table 4. Frequency Distribution, Percentages, Cumulative Percentages, and Classification Labels for S-BLRE.
S-BLRE p (invalid) f % Cumulative % Classification
≤0.00 .50 123 54.4 54.4 PASS
0.42 .60 6 2.6 57.1 Pass
1.17 .70 32 14.2 71.2 Pass
1.43 .75 4 1.8 73.0 Pass
1.69 .80 10 4.4 77.4 Pass
2.41 .85 8 3.5 81.0 Borderline
3.16 .90 15 6.6 87.6 Fail
3.68 .95 10 4.3 92.0 Fail
4.43 .96–.98 7 3.1 95.1 FAIL
4.69 .99 3 1.3 96.5 FAIL
Note. S-BLRE = Logistic regression equation developed by Suhr and Boyer (1999).

The original cutoff on the S-BLRE (>0.00) identifies nearly half of the sample as invalid, with unacceptably low specificity (.55–.64) against all reference PVTs. A more stringent cutoff (≥3.16) achieves the minimum level of specificity (.87–.89), but results in low and variable sensitivity (.06–.38). Defining Pass as <2.41 and Fail as ≥3.68 achieves good specificity (.89–.91), but again results in low and variable sensitivity (.08–.50). Redefining a Pass as <3.16 and a Fail as ≥3.68 leaves specificity essentially unchanged (.90–.92), but lowers sensitivity (.07–.38). Cutting the distribution at 3.68 produces essentially the same results (Tables 4 and 6).
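The adjusted pediatric decision rules reported above can be summarized in a short sketch (hypothetical helper functions, written for illustration only):

```python
def classify_fms(fms: int) -> str:
    """Adjusted pediatric FMS rule: Pass < 2, Fail >= 4, with a residual
    indeterminate band at FMS of 2-3 (see Discussion)."""
    if fms < 2:
        return "Pass"
    if fms >= 4:
        return "Fail"
    return "Indeterminate"

def classify_sblre(score: float) -> str:
    """Adjusted pediatric S-BLRE rule: a single cut of the distribution at 3.68."""
    return "Fail" if score >= 3.68 else "Pass"
```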
Failing the WMT has a significant negative correlation with FSIQ. Failing the CPT-II PER and the S-BLRE is significantly and negatively correlated with age. Finally, failing the FMS has a significant negative correlation with gender (Table 7). Although statistically significant, even the strongest of these correlation coefficients (r = −.19) accounts for <4% of the variance (r² ≈ .036).

Discussion
The present study examines, in a pediatric sample, the classification accuracy of two WCST-based EVIs originally validated in adults. The results converge on a number of main findings. First, the original adult cutoffs produce unacceptably high BRFail (33.2% and 45.6%) and low specificity (.67–.72 and .55–.64) against reference PVTs for FMS and S-BLRE, respectively. Second, applying more stringent cutoffs synchronizes the BRFail (9.3–9.6%) with that observed on established PVTs. The optimal Pass/Fail cutoffs (FMS <2/≥4 and S-BLRE <3.68/≥3.68) produce consistently good specificity (.89–.92) but variable sensitivity (.07–.38). There is a residual "indeterminate" range on the FMS (2–3) that cannot be confidently classified as either valid or invalid.

Table 5. Sensitivity, Specificity, and Base Rates of Failure of Various FMS Cutoffs against Reference PVTs. (The value in parentheses after each FMS cutoff is that cutoff's BRFail, in %.)

                                 WMT    MSVT   NV-MSVT   OMI    PER    EI-5A   EI-5B
%ADM                             84.5   90.7   85.8      92.5   92.5   65.0    100.0
Cutoff                           Std    Std    Std       >80    >70    ≥3      ≥3
BRFail                           6.8    3.9    7.7       4.8    8.1    12.2    7.6
Pass <2 / Fail ≥2 (33.2)   Sens  .62    .50    .40       .60    .24    .44     .44
                           Spec  .69    .68    .68       .69    .67    .72     .69
Pass <2 / Fail ≥3 (19.3)   Sens  .38    .43    .10       .43    .19    .31     .31
                           Spec  .80    .81    .81       .83    .82    .83     .82
Pass <3 / Fail ≥3 (15.9)   Sens  .23    .38    .07       .30    .18    .25     .25
                           Spec  .83    .84    .84       .85    .85    .85     .85
Pass <2 / Fail ≥4 (9.6)    Sens  .17    .33    0.0       .20    .07    .10     .10
                           Spec  .89    .92    .90       .91    .91    .92     .91
Pass <2 / Fail ≥5 (5.0)    Sens  .17    .33    0.0       .20    .07    .10     .10
                           Spec  .95    .97    .95       .96    .96    .99     .96
Note. %ADM = percentage of sample to which the test was administered; CPT-II = Conners’ Continuous Performance
Test – Second Edition; EI-5A = Erdodi Index – Five Variable Model, Version A; EI-5B = Erdodi Index – Five Variable
Model, Version B; MSVT = Medical Symptom Validity Test; NV-MSVT = Nonverbal Medical Symptom Validity Test; OMI
= CPT-II Omissions t-score; PER = CPT-II Perseverations t-score; Sens = sensitivity; Spec = specificity; Std = standard
cutoffs as per manual; WMT = Green’s Word Memory Test.

Table 6. Sensitivity, Specificity, and Base Rates of Failure for Various S-BLRE Cutoffs against Reference PVTs. (The value in parentheses after each S-BLRE cutoff is that cutoff's BRFail, in %.)

                                       WMT    MSVT   NV-MSVT   OMI    PER    EI-5A   EI-5B
%ADM                                   84.5   90.7   85.8      92.5   92.5   65.0    100.0
Cutoff                                 Std    Std    Std       >80    >70    ≥3      ≥3
BRFail                                 6.8    3.9    7.7       4.8    8.1    12.2    7.6
Pass <0.00 / Fail ≥0.00 (45.6)   Sens  .85    .63    .60       .80    .65    .63     .63
                                 Spec  .60    .55    .58       .57    .57    .64     .57
Pass <3.16 / Fail ≥3.16 (12.8)   Sens  .23    .38    .13       .10    .06    .06     .06
                                 Spec  .87    .88    .88       .88    .88    .89     .88
Pass <2.41 / Fail ≥3.68 (10.4)   Sens  .30    .50    .08       .14    .08    .09     .09
                                 Spec  .89    .91    .90       .90    .90    .91     .90
Pass <3.16 / Fail ≥3.68 (9.6)    Sens  .23    .38    .07       .10    .06    .06     .06
                                 Spec  .90    .92    .91       .91    .90    .92     .91
Pass <3.68 / Fail ≥3.68 (9.3)    Sens  .23    .38    .07       .10    .06    .06     .06
                                 Spec  .90    .92    .91       .91    .91    .92     .91
Note. %ADM = percentage of sample to which the test was administered; CPT-II = Conners’ Continuous Performance
Test – Second Edition; EI-5A = Erdodi Index – Five Variable Model, Version A; EI-5B = Erdodi Index – Five Variable
Model, Version B; MSVT = Medical Symptom Validity Test; NV-MSVT = Nonverbal Medical Symptom Validity Test; OMI
= CPT-II Omissions t-score; PER = CPT-II Perseverations t-score; S-BLRE = logistic regression equation developed by
Suhr and Boyer (1999); Sens = sensitivity; Spec = specificity; Std = standard cutoffs as per manual; WMT = Green’s
Word Memory Test.

These results are consistent with the first hypothesis that at the original adult cutoffs
the FMS and S-BLRE would produce unacceptably high false positive rates in children.
Therefore, they should not be applied to pediatric samples. Given the combination of
the multifactorial nature of the task, the strong age effects on overall WCST performance, and the uncorrected raw scores used as PVTs, it is not surprising that ability and
effort are harder to differentiate in children compared to adults.

Table 7. Pearson Correlation Coefficients between the Reference PVTs, FMS, S-BLRE, and Age, Gender,
and FSIQ.
Cutoff Age Gender FSIQ
WMT Std −.04 −.12 −.18*
MSVT Std .05 .01 −.11
NV-MSVT Std −.08 −.10 −.13
CPT-II OMI >80 −.06 .02 −.01
CPT-II PER >70 −.18** −.04 −.06
EI-5A ≥3.00 −.03 −.06 −.12
EI-5B ≥3.00 .00 −.03 −.10
FMS ≥4.00 −.12 −.18* −.06
S-BLRE ≥3.68 −.19** −.13 −.08
Note. *p < .05; **p < .01. CPT-II = Conners’ Continuous Performance Test – Second Edition; EI-5A = Erdodi Index – Five
Variable Model, Version A; EI-5B = Erdodi Index – Five Variable Model, Version B; FMS = failure to maintain set; MSVT
= Medical Symptom Validity Test; NV-MSVT = Nonverbal Medical Symptom Validity Test; OMI = CPT-II Omissions
t-score; PER = CPT-II Perseverations t-score; S-BLRE = logistic regression equation developed by Suhr and Boyer (1999);
Std = standard cutoffs as per manual; WMT = Green’s Word Memory Test.

The second hypothesis is also supported by the data. After a substantial upward adjustment of the cutoffs, both EVIs achieve adequate specificity. Nevertheless, sensitivity falls short of values reported in the adult literature (.42 in Greve et al., 2009; .48 in Larrabee, 2003; .82 in Suhr & Boyer, 1999). This may be an artifact of the low overall BRFail in children (Kirkwood, 2015), which constrains the instruments' ability to detect invalid responding. Alternatively, low and variable sensitivity could reflect a more complex relationship between ability and effort in pediatric populations, making it more challenging to differentiate between the two. At the same time, fluctuations in both specificity (.75–.88 in King et al., 2002) and sensitivity (.19–.47 in Greve et al., 2002) have also been documented in studies on adults. Therefore, this trend may not be unique to pediatric samples but may instead reflect inherent properties of these indicators.
The third hypothesis is partially supported. While at the optimal cutoff the S-BLRE has
a modestly superior sensitivity compared to FMS, the specificity is essentially the same.
However, the fact that a single cutoff (<3.68/≥3.68) achieves a slightly better overall
classification accuracy on S-BLRE than a truncated one (<2/≥4) on the FMS may be
construed as an additional source of evidence that multivariate models of performance
validity assessment tend to be superior to single indicators (Larrabee, 2003, 2007).
Of the reference PVTs, the EI-5s consistently produce the highest specificity, despite having the highest BRFail. This superior specificity often comes at the expense of sensitivity. Given that the EI-5s contain data on five different validity indicators, they capture both the number and extent of PVT failures. As such, they provide a more nuanced measure of test-taking effort. More importantly, they demonstrate that, contrary to recent claims that using multiple PVTs results in an inflated false positive rate (Berthelson, Mulchan, Odland, Miller, & Mittenberg, 2013; Bilder, Sugar, & Hellemann, 2014), responsible choices of cutoffs in multivariate models of performance validity assessment not only contain false positive errors (Davis & Millis, 2014; Larrabee, 2014a, 2014b; Silk-Eglit, Stenclik, Miele, Lynch, & McCaffrey, 2015) but may even reduce them.
At the newly introduced, adjusted cutoffs, the FMS and S-BLRE meet the criteria outlined by Donders (2005) for adopting adult PVTs for use with pediatric patients. First, the BRFail is very close to the target of <10% and comparable to that produced by established PVTs.

Second, performance on the target EVIs is unrelated to age or cognitive functioning, further demonstrating their ability to dissociate effort from impairment.
Inevitably, this study has a number of limitations. The sample is diagnostically
heterogeneous and geographically restricted. Additionally, reference PVTs were not
administered to every child in the sample, potentially introducing instrumentation-
based method variance. Although the remarkably similar classification accuracy produced by the two versions of the EI-5 largely alleviates this concern, the effect of missing data on some of the PVTs is ultimately unknown. The authors are also unaware whether any participants were involved in litigation or seeking compensation as part of their evaluation. Such factors may have played a role in their motivation and effort, and thus may have influenced these findings.
The present investigation makes a number of important contributions to the clinical literature. It introduces to pediatric neuropsychology two WCST-based EVIs that were previously available only for adults. Optimal cutoffs have been identified through a cross-validation process based on established PVTs. Finally, further evidence is provided that using a multivariate approach to performance validity assessment does not inflate false positive rates; on the contrary, it may help to contain them.
In summary, at the new cutoffs optimized for younger examinees, the FMS and
S-BLRE can function as effective validity indicators. While they may be less stable
parameters in children than in adults, they can still make important contributions to
the evaluation of cognitive effort. Given the low cost of obtaining these EVIs (no
additional test to administer, open source, easy to compute) and their clinical utility
in dissociating ability from effort, they are valuable additions to the toolbox of pediatric
neuropsychologists. However, their low and variable sensitivity remains a liability;
therefore, they should never be used in isolation to determine performance validity.

Disclosure Statement
No potential conflict of interest was reported by the authors.

References
Ashendorf, L., O’Bryant, S. E., & McCaffrey, R. J. (2003). Specificity of malingering detection
strategies in older adults using the CVLT and WCST. The Clinical Neuropsychologist, 17(2),
255–262. doi:10.1076/clin.17.2.255.16502
Axelrod, B. N., Meyers, J. E., & Davis, J. J. (2014). Finger tapping test performance as a measure
of performance validity. The Clinical Neuropsychologist, 28(5), 876–888. doi:10.1080/
13854046.2014.907583
Babikian, T., Boone, K. B., Lu, P., & Arnold, G. (2006). Sensitivity and specificity of various digit
span scores in the detection of suspect effort. The Clinical Neuropsychologist, 20, 145–159.
doi:10.1080/13854040590947362
Baker, D. A., Connery, A. K., Kirk, J. W., & Kirkwood, M. W. (2014). Embedded performance
validity indicators within the California Verbal Learning Test, Children’s Version. The Clinical
Neuropsychologist, 28(1), 116–127. doi:10.1080/13854046.2013.858184
Batt, K., Shores, E. A., & Chekaluk, E. (2008). The effect of distraction on the Word Memory Test
and Test of Memory Malingering performance in patients with a severe brain injury. The
Journal of the International Neuropsychological Society, 14(6), 1074–1080. doi:10.1017/
S135561770808137X

Berthelson, L., Mulchan, S. S., Odland, A. P., Miller, L. J., & Mittenberg, W. (2013). False positive
diagnosis of malingering due to the use of multiple effort tests. Brain Injury, 27, 909–916.
doi:10.3109/02699052.2013.793400
Bilder, R. M., Sugar, C. A., & Hellemann, G. S. (2014). Cumulative false positive rates given
multiple performance validity tests: Commentary on Davis and Millis (2014) and Larrabee
(2014). The Clinical Neuropsychologist, 28(8), 1212–1223. doi:10.1080/13854046.2014.969774
Blaskewitz, N., Merten, T., & Kathmann, N. (2008). Performance of children on symptom
validity tests: TOMM, MSVT, and FIT. Archives of Clinical Neuropsychology, 23(4), 379–391.
doi:10.1016/j.acn.2008.01.008
Boone, K. B. (2009). The need for continuous and comprehensive sampling of effort/response
bias during neuropsychological examinations. The Clinical Neuropsychologist, 23(4), 729–741.
doi:10.1080/13854040802427803
Boone, K. B. (2013). Clinical practice of forensic neuropsychology. New York, NY: Guilford.
Brooks, B. L. (2012). Victoria Symptom Validity Test performance in children and adolescents
with neurological disorders. Archives of Clinical Neuropsychology, 27(8), 858–868. doi:10.1093/
arclin/acs087
Brooks, B. L., & Ploetz, D. M. (2015). Embedded performance validity on the CVLT-C for youth
with neurological disorders. Archives of Clinical Neuropsychology, 30, 200–206. doi:10.1093/
arclin/acv017
Bush, S. S., Ruff, R. M., Troster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., . . . Silver, C.
(2005). Symptom validity assessment: Practice issues and medical necessity (NAN Policy and
Planning Committees). Archives of Clinical Neuropsychology, 20, 419–426. doi:10.1016/j.
acn.2005.02.002
Carone, D. A. (2008). Children with moderate/severe brain damage/dysfunction outperform
adults with mild-to-no brain damage on the Medical Symptom Validity Test. Brain Injury,
22, 960–971. doi:10.1080/02699050802491297
Davis, J. J., & Millis, S. R. (2014). Examination of performance validity test failure in relation to
number of tests administered. The Clinical Neuropsychologist, 28(2), 199–214. doi:10.1080/
13854046.2014.884633
DeRight, J., & Carone, D. A. (2015). Assessment of effort in children: A systematic review. Child
Neuropsychology, 21(1), 1–24. doi:10.1080/09297049.2013.864383
Donders, J. (2005). Performance on the Test of Memory Malingering in a mixed pediatric
sample. Child Neuropsychology, 11, 221–227. doi:10.1080/09297040490917298
Erdodi, L. A., Lichtenstein, J. D., Rai, J. K., & Flaro, L. (2016). Embedded validity indicators in
Conners’ CPT-II: Do adult cutoffs work the same way in children? Applied Neuropsychology:
Child, 1–9. Advance online publication. doi:10.1080/21622965.2016.1198908
Erdodi, L. A., Roth, R. M., Kirsch, N. L., Lajiness-O’Neill, R., & Medoff, B. (2014). Aggregating
validity indicators embedded in Conners’ CPT-II outperforms individual cutoffs at separating
valid from invalid performance in adults with traumatic brain injury. Archives of Clinical
Neuropsychology, 29(5), 456–466. doi:10.1093/arclin/acu026
Erdodi, L. A., Tyson, B. T., Abeare, C. A., Lichtenstein, J. D., Pelletier, C. L., Rai, J. K., & Roth, R.
M. (2016). The BDAE complex ideational material – A measure of receptive language or
performance validity? Psychological Injury and Law, 9, 112–120. doi:10.1007/s12207-016-9254-6
Gligorović, M., & Buha, N. (2013). Conceptual abilities of children with mild intellectual disability: Analysis of Wisconsin Card Sorting Test performance. Journal of Intellectual and Developmental Disability, 38(2), 134–140. doi:10.3109/13668250.2013.772956
Grant, D. A., & Berg, E. A. (1948). A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. Journal of Experimental Psychology, 38, 404–411.
Green, P. (2003). Manual for the Word Memory Test. Edmonton, AB: Green’s.
Green, P. (2004). Manual for Green’s Medical Symptom Validity Test (MSVT). Edmonton, AB:
Green’s.
Green, P. (2008). Manual for the Nonverbal Medical Symptom Validity Test. Edmonton, AB:
Green’s.
Green, P., & Flaro, L. (2003). Word Memory Test performance in children. Child
Neuropsychology, 9, 189–207. doi:10.1076/chin.9.3.189.16460

Greve, K. W., & Bianchini, K. J. (2002). Using the Wisconsin Card Sorting Test to detect
malingering: An analysis of the specificity of two methods in nonmalingering normal and
patient samples. Journal of Clinical and Experimental Neuropsychology, 24, 48–54. doi:10.1076/
jcen.24.1.48.968
Greve, K. W., & Bianchini, K. J. (2004). Setting empirical cut-offs on psychometric indicators of
negative response bias: A methodological commentary with recommendations. Archives of
Clinical Neuropsychology, 19, 533–541. doi:10.1016/j.acn.2003.08.002
Greve, K. W., Bianchini, K. J., Mathias, C. W., Houston, R. J., & Crouch, J. A. (2002). Detecting
malingered performance with the Wisconsin Card Sorting Test: A preliminary investigation in
traumatic brain injury. The Clinical Neuropsychologist, 16, 179–191. doi:10.1076/
clin.16.2.179.13241
Greve, K. W., Heinly, M. T., Bianchini, K. J., & Love, J. M. (2009). Malingering detection with the
Wisconsin Card Sorting Test in mild traumatic brain injury. The Clinical Neuropsychologist,
23, 343–362. doi:10.1080/13854040802054169
Greve, K. W., Ord, J., Curtis, K. L., Bianchini, K. J., & Brennan, A. (2008). Detecting malingering in
traumatic brain injury and chronic pain: A comparison of three forced-choice symptom validity
tests. The Clinical Neuropsychologist, 22(5), 896–918. doi:10.1080/13854040701565208
Gunn, D., Batchelor, J., & Jones, M. (2010). Detection of simulated memory impairment in 6- to 11-
year-old children. Child Neuropsychology, 16(2), 105–118. doi:10.1080/09297040903352564
Harrison, A. G., & Armstrong, I. (2014). WISC-IV unusual digit span performance in a sample
of adolescents with learning disabilities. Applied Neuropsychology: Child, 3, 152–160.
doi:10.1080/21622965.2012.753570
Harrison, A. G., Flaro, L., & Armstrong, I. (2015). Rates of effort test failure in children with
ADHD: An exploratory study. Applied Neuropsychology: Child, 4(3), 197–210. doi:10.1080/
21622965.2013.850581
Heaton, R. K., Chelune, G. J., Talley, J. L., Kay, G. G., & Curtis, G. (1993). Wisconsin Card
Sorting Test (WCST) manual revised and expanded. Odessa, FL: Psychological Assessment
Resources.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants. (2009).
American Academy of Clinical Neuropsychology Consensus Conference statement on the
neuropsychological assessment of effort, response bias, and malingering. The Clinical
Neuropsychologist, 23, 1093–1129. doi:10.1080/13854040903155063
Iverson, G. L. (1998). 21-Item Test research manual. Vancouver, BC: Author.
Jodzio, K., & Biechowska, D. (2010). Wisconsin Card Sorting Test as a measure of executive
function impairments in stroke patients. Applied Neuropsychology, 17(4), 267–277.
doi:10.1080/09084282.2010.525104
King, J. H., Sweet, J. J., Sherer, M., Curtiss, G., & Vanderploeg, R. D. (2002). Validity indicators
within the Wisconsin Card Sorting Test: Application of new and previously researched
multivariate procedures in multiple traumatic brain injury samples. The Clinical
Neuropsychologist, 16(4), 506–523. doi:10.1076/clin.16.4.506.13912
Kirkwood, M. W. (2015). Validity testing in child and adolescent assessment: Evaluating exaggera-
tion, feigning, and noncredible effort. New York, NY: Guilford.
Kirkwood, M. W., Hargrave, D. D., & Kirk, J. W. (2011). The value of the WISC-IV digit span
subtest in detecting noncredible performance during pediatric neuropsychological examina-
tions. Archives of Clinical Neuropsychology, 26, 377–384. doi:10.1093/arclin/acr040
Kirkwood, M. W., & Kirk, J. W. (2010). The base rate of suboptimal effort in a pediatric mild TBI
sample: Performance on the Medical Symptom Validity Test. The Clinical Neuropsychologist,
24(5), 860–872. doi:10.1080/13854040903527287
Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard
neuropsychological tests. The Clinical Neuropsychologist, 17(3), 410–425. doi:10.1076/
clin.17.3.410.18089
Larrabee, G. J. (2007). Assessment of malingering. In G. J. Larrabee (Ed.), Forensic neuropsychology: A scientific approach. New York, NY: Oxford University Press.
Larrabee, G. J. (2008). Aggregation across multiple indicators improves the detection of mal-
ingering: Relationship to likelihood ratios. The Clinical Neuropsychologist, 22, 666–679.
doi:10.1080/13854040701494987
Larrabee, G. J. (2014a). False-positive rates associated with the use of multiple performance and
symptom validity tests. The Archives of Clinical Neuropsychology, 29(4), 364–373. doi:10.1093/
arclin/acu019
Larrabee, G. J. (2014b). Test validity and performance validity: Considerations in providing a
framework for development of an ability-focused neuropsychological test battery. The Archives
of Clinical Neuropsychology, 29(7), 695–714. doi:10.1093/arclin/acu049
Martin, R. C., Haut, J. S., Stainbrook, T., & Franzen, M. D. (1995). Preliminary normative data
for objective measures to detect malingered neuropsychological deficits in a population of
adolescent patients [Abstract]. Archives of Clinical Neuropsychology, 10, 364–365. doi:10.1093/
arclin/10.4.364
Miele, A. S., Gunner, J. H., Lynch, J. K., & McCaffrey, R. J. (2012). Are embedded validity indices
equivalent to free-standing symptom validity tests? Archives of Clinical Neuropsychology, 27(1),
10–22. doi:10.1093/arclin/acr084
Nelson, N. W., Sweet, J. J., Berry, D. T., Bryant, F. B., & Granacher, R. P. (2007). Response
validity in forensic neuropsychology: Exploratory factor analytic evidence of distinct cognitive
and psychological constructs. Journal of the International Neuropsychological Society, 13(3),
440–449. doi:10.1017/S1355617707070373
Ploetz, D. M., Mazur-Mosiewicz, A., Kirkwood, M. W., Sherman, E. M. S., & Brooks, B. L.
(2016). Performance on the Test of Memory Malingering in children with neurological
conditions. Child Neuropsychology, 22, 133–142. doi:10.1080/09297049.2014.986446
Rabin, L. A., Paolillo, E., & Barr, W. B. (2016). Stability in test-usage practices of clinical
neuropsychologists in the United States and Canada over a 10-year period: A follow-up survey
of INS and NAN members. Archives of Clinical Neuropsychology, 31, 206–230. doi:10.1093/
arclin/acw007
Retzlaff, P., Butler, M., & Vanderploeg, R. D. (1992). Neuropsychological battery choice and
theoretical orientation: A multivariate analysis. The Journal of Clinical Psychology, 48(5), 666–
672. doi:10.1002/(ISSN)1097-4679
Romine, C. B., Lee, D., Wolfe, M. E., Homack, S., George, C., & Riccio, C. A. (2004). Wisconsin
Card Sorting Test with children: A meta-analytic study of sensitivity and specificity. Archives
of Clinical Neuropsychology, 19, 1027–1041. doi:10.1016/j.acn.2003.12.009
Sherman, E. M. S., & Brooks, B. L. (2015). Children and Adolescent Memory Profile (ChAMP)
professional manual. Lutz, FL: PAR.
Silk-Eglit, G. M., Stenclik, J. H., Miele, A. S., Lynch, J. K., & McCaffrey, R. J. (2015). Rates of false-
positive classification resulting from the analysis of additional embedded performance validity
measures. Applied Neuropsychology: Adult, 22(5), 335–347. doi:10.1080/23279095.2014.938809
Slick, D., Hopp, G., Strauss, E., & Thompson, G. (1997). The Victoria Symptom Validity Test.
Odessa, FL: PAR.
Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered
neurocognitive dysfunction: Proposed standards for clinical practice and research. The
Clinical Neuropsychologist, 13(4), 545–561.
Suhr, J. A., & Boyer, D. (1999). Use of the Wisconsin Card Sorting Test in the detection of
malingering in student simulator and patient samples. Journal of Clinical and Experimental Neuropsychology, 21(5), 701–708.
Tombaugh, T. N. (1996). Test of Memory Malingering (TOMM). North Tonawanda, NY: Multi-Health Systems.
Welsh, A. J., Bender, H. A., Whitman, L. A., Vasserman, M., & MacAllister, W. S. (2012). Clinical
utility of reliable digit span in assessing effort in children and adolescents with epilepsy.
Archives of Clinical Neuropsychology, 27(7), 735–741. doi:10.1093/arclin/acs063
