Silva 2019
DOI: 10.1002/pits.22306
RESEARCH ARTICLE
1 Department of Counseling and School Psychology, University of Massachusetts, Boston, Massachusetts
2 May Institute, Randolph, Massachusetts
3 Department of Educational Psychology, University of Minnesota, Minneapolis, Minnesota

Correspondence
Meghan R. Silva, May Institute, 41 Pacella Park Drive, Randolph, MA 02368.
Email: msilva@mayinstitute.org

Present address
Meghan R. Silva, May Institute, Randolph, MA
Robin S. Codding, Department of Applied Psychology, Northeastern University, Boston, MA

Abstract
Recommendations from multiple professional organizations (e.g., American Psychological Association, Council for Exceptional Children, National Association of School Psychologists) suggest that the collection of data on social validity in practice and research is necessary. The purpose of this study was to systematically review the inclusion of acceptability measurement, which has been one of the most common ways to measure social validity, within the intervention literature published across five school psychology journals between 2005 and 2017. Findings suggested just over one third of intervention studies included acceptability assessment. Intervention studies that were delivered individually, targeted behavior skills, and included treatment integrity data were significantly more likely to include acceptability assessment. When acceptability was measured, it was typically evaluated one time following treatment completion using self‐report tools completed by teachers. Nearly half of the studies employed one of seven published tools and the remaining half used researcher‐created measures. The published tools were adapted in a variety of ways and inconsistently reported either item or total scores, making it difficult to summarize these data according to intervention target or delivery format. Implications of the findings are described.
KEYWORDS
acceptability, intervention, social validity, treatment integrity
1 | INTRODUCTION
Acceptability is a component of social validity that has been recommended as part of ongoing intervention
evaluation (American Psychological Association [APA], 2002; Council for Exceptional Children [CEC], 2014;
National Association of School Psychologists [NASP], 2010). Assessing for acceptability is one method to determine
if a socially important outcome was achieved (Kazdin, 1977). To better understand if interventions are meeting this
indicator, acceptability should be measured in the intervention literature, though past reviews suggest it is
infrequently reported (e.g., Villarreal, Ponce, & Gutierrez, 2015). This may be because a standardized
approach to the assessment and reporting of acceptability continues to be absent from many applied research
journal guidelines (Callahan et al., 2017; Roach, Wixson, Talapatra, & LaSalle, 2009) despite recommendations to
include acceptability in professional practice and research. Without a standardized approach to the assessment and
reporting of acceptability, it is unclear how practitioners and researchers report and interpret acceptability
findings. To provide a more comprehensive understanding of how researchers measure and report acceptability
assessment in intervention research, we conducted a systematic review of all school‐based intervention studies
across five school psychology journals from 2005 to 2017.
Acceptability was initially conceptualized as a construct within social validity and refers to how an individual judges
the procedures of a treatment or intervention to be appropriate, fair, reasonable, or intrusive (Finn & Sladeczek,
2001; Kazdin, 1980). For interventions to be considered socially valid, the intervention goals need to be congruent
with societal goals, the procedures need to be socially appropriate and acceptable to the participants, and the
effects of the intervention should be satisfying to the participants (Wolf, 1978). Acknowledging this importance,
multiple professional organizations highlight the critical nature of social validity and, more specifically, acceptability
when it comes to professional practice and research guidelines. For example, APA (2006) suggests all intervention
evaluation should be based on both efficacy and clinical utility, which includes the generalizability, feasibility, and
acceptability of an intervention. Similarly, NASP (2010) proposes assessing for acceptability to be a professional
responsibility that should permeate all aspects of intervention delivery (i.e., planning, implementation, evaluation).
Further, CEC (2014) suggests studies examining the effect of an intervention on student outcomes address two
dimensions of social validity: (a) socially important outcomes (e.g., improved quality of life) and (b) meaningful
magnitude of change in the dependent variable for study participants. CEC recommends subjective evaluation (e.g.,
acceptability assessment) as one way to demonstrate the socially important outcomes.
Kazdin (1977) also suggested subjective evaluation as a primary method for measuring acceptability. Subjective
evaluation has mainly resulted in acceptability assessments consisting of self‐report questionnaires that are
summarized into an overall acceptability score (Eckert & Hintze, 2000; Finn & Sladeczek, 2001). With a diverse pool
of students and stakeholders, perceptions of intervention procedures will likely differ due to variances in
knowledge accumulation, exposure to different experiences, and beliefs about the intervention. The assessment of
acceptability may bring awareness to components that are critical to intervention planning, support,
sustainability, and evaluation (APA, 2002; Cross Calvert & Johnston, 1990).
Collecting stakeholders’ acceptability ratings of intervention outcomes has informed understanding of intervention implementation factors.
Several reviews of the acceptability literature, including both analog and naturalistic studies, have been conducted
over the past 30 years (e.g., Carter, 2007; Miltenberger, 1990). Miltenberger (1990) conducted a review of the
acceptability research from the 1980s and found the interventions most likely to be rated as acceptable were those
found to have limited negative side effects, require minimal time to implement, were least restrictive and
disruptive, fit with the orientation of the intervention implementer, considered necessary to improve outcomes,
and perceived to be the most effective options. Carter (2007) expanded the review conducted by Miltenberger and
likewise found that numerous factors influence acceptability ratings, including problem severity, type of
treatment, intrusiveness of the intervention, professional affiliation, and professional
expertise. Furthermore, these data suggested factors related to treatments, clients, and raters have all been shown
to influence acceptability ratings. Because of individual differences, each person may place different values on the
aforementioned factors which may lead to variation in acceptability ratings and ultimately intervention integrity,
effectiveness, and use.
Conceptual models of acceptability propose a bidirectional and interdependent relationship between
intervention acceptability, integrity, effectiveness, and use (Eckert & Hintze, 2000; Reimers, Wacker, & Koeppl,
1987; Witt & Elliott, 1985). That is, if an intervention is considered to be acceptable, it is expected to increase the
likelihood of an intervention being used and implemented fully, which in turn is expected to lead to improved
outcomes and even greater acceptability. Studies that have examined the influence of acceptability on treatment
integrity present a more nuanced picture (e.g., Allinder & Oats, 1997; Dart, Cook, Collins, Gresham, & Chenier,
2012; Mautone et al., 2009; Peterson & McConnell, 1996; Sterling‐Turner & Watson, 2002). Some researchers have
found small to moderate, yet significant, positive relationships between acceptability and treatment integrity (e.g.,
Allinder & Oats, 1997; Dart et al., 2012; Mautone et al., 2009). Higher acceptability ratings have been found to
correlate with higher ratings of intervention and assessment implementation and effectiveness with parents
(Reimers, Wacker, Cooper, & DeRaad, 1992) and teachers (Allinder & Oats, 1997; Mautone et al., 2009). However,
other research has not demonstrated a significant relationship between acceptability and treatment integrity (e.g.,
Peterson & McConnell, 1996; Sterling‐Turner & Watson, 2002). Potential explanations for the mixed evidence include
changes in acceptability ratings over time (e.g., Mautone et al., 2009), analog (e.g., Sterling‐Turner & Watson, 2002)
versus naturalistic investigations (e.g., Allinder & Oats, 1997), and exposure to multiple interventions (Dart et al.,
2012). This inconclusive evidence suggests a need for further research.
4 | ACCEPTABILITY ASSESSMENT
To evaluate and better understand the relationship between acceptability and implementation, acceptability must
be assessed and reported. Most intervention research that has included acceptability assessment data utilizes
previously published rating scales (Carter, 2007; Finn & Sladeczek, 2001). Since the 1980s, researchers have
systematically developed acceptability measures to assess perceptions of intervention procedures (Finn &
Sladeczek, 2001). To highlight a few, Kazdin (1980) developed the Treatment Evaluation Inventory (TEI), a 15‐item
measure to evaluate acceptability of interventions to address children’s behavior problems. The Treatment
Acceptability Rating Form (TARF; Reimers & Wacker, 1988), a 15‐item measure, and the TARF‐R (Reimers et al.,
1992) a 20‐item measure, were developed to incorporate numerous dimensions of acceptability (Finn & Sladeczek,
2001). Specific to school‐based interventions, Witt and colleagues developed several measures of intervention
acceptability including the Intervention Rating Profile (IRP; Witt & Martens, 1983), Children’s Intervention Rating
Profile (CIRP; Witt & Elliott, 1985), and Behavior Intervention Rating Scale (BIRS; Von Brock & Elliott, 1987).
Taking a broader view of intervention acceptability, Chafouleas and colleagues developed a suite of acceptability
measures to evaluate adults’ perceptions of interventions (Usage Rating Profile—Intervention Revised [URP‐IR];
Briesch, Chafouleas, Neugebauer, & Riley‐Tillman, 2013), students’ perceptions of interventions (Children’s Usage
Rating Profile [CURP], Briesch & Chafouleas, 2009), and adults’ perceptions of assessments (Usage Rating Profile—
Assessment [URP‐A]; Miller, Neugebauer, Chafouleas, Briesch, & Riley‐Tillman, 2013). Recently, Eckert, Hier,
Hamsho, and Malandrino (2017) evaluated a measure of students’ perceptions of academic interventions (Kids
Intervention Profile). These measures provide researchers with a range of options for assessing acceptability, but
all remain paper‐and‐pencil self‐report questionnaires, despite calls from researchers to more dynamically and
robustly assess this multidimensional construct (Finn & Sladeczek, 2001).
Whichever approach to acceptability assessment researchers employ, it must also be reported in journal articles to
contribute to the broader literature. Before Wolf (1978) advocated for the assessment of social validity, researchers and
readers had been individually responsible for determining the importance of an intervention or strategy based on their
personal perceptions. The concept that the actual consumers of the intervention could provide valuable input about the
intervention was a major shift in how researchers viewed the importance of an intervention (Finney, 1991). However,
research suggests that consumers’ acceptability data continue to be neglected in the literature.
Preliminary data on acceptability assessment within the intervention research have been examined in three systematic
reviews. First, in documenting consultation research published from 1985 to 1995, Sheridan, Welch, and Orme (1996)
reported consumer acceptability and satisfaction were evaluated in 48% of the 46 consultation studies. This level of
acceptability reporting was greater than reporting on the social meaningfulness of the treatment outcomes (37%),
treatment integrity (26%), and generalization (6%) in these same studies. Second, Roach et al. (2009) examined research
articles from four major school psychology journals between 2002 and 2007 and found only 16% of the published
articles included the voices of students in school psychology research. In this review, student acceptability was defined
as the examination of student experiences and perceptions through interviews, surveys, or questionnaires. Third,
Villarreal et al. (2015) looked at the inclusion of acceptability data in intervention research from 2005 to 2014 in six
school psychology journals. Quantitative acceptability data were included in 30.5% of the 243 studies, most often by
teachers and using a published treatment acceptability instrument. Acceptability, without quantitative data, was
mentioned in 5.8% of the studies and not mentioned in 60.38% of studies. These reviews provide a broad sense of the
lack of inclusion of acceptability data in school psychology research but provide only limited information about how
acceptability is assessed when it is reported.
5 | PURPOSE
Acceptability is a critical component of evaluating an intervention (APA, 2002; NASP, 2010). The relationship
between acceptability, implementation, and outcomes is not clear, suggesting the need for additional research in
this area (Finn & Sladeczek, 2001; Kazdin, 1980). Although previously conducted reviews present valuable
preliminary information regarding the prevalence of acceptability assessment within the consultation and
intervention literature, questions remain about how intervention studies assess for acceptability, report
acceptability data, and determine whether an intervention was considered acceptable. To do so requires the
analysis of use and measurement of specific acceptability tools, as well as considering the level of acceptability
reported in studies (i.e., how acceptable the participants found the interventions). For example, when a published
rating scale is used to assess for acceptability, determining if the scale was adapted or modified, as well as the
timing of the assessment (e.g., pre, post, pre and post), and how the data were reported (e.g., total score or item
score) will all impact how researchers determine if a socially important outcome was achieved. In addition, it is
valuable to compare characteristics of intervention studies that did and did not include acceptability, to provide
context that clarifies the circumstances under which acceptability data are more likely to be collected. In this way,
the present review of the literature provides a detailed view of current acceptability assessment practices and their
inclusion in the school psychology literature. Research questions included:
● What are the study, participant, and intervention characteristics of school psychology intervention studies
generally and those that assess acceptability specifically?
● How do these characteristics vary depending on the inclusion of acceptability assessment?
6 | METHOD
When disagreements arose, the third author determined article inclusion. To evaluate reliability of the article coding, 26.49% of the 268
studies (n = 71) were coded by the two graduate student raters resulting in 89.29% agreement.
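The interrater reliability figure above is simple percent agreement (the number of matching codes divided by the total number of codes). A minimal sketch of that computation, using hypothetical inclusion codes rather than the study's actual coding data:

```python
def percent_agreement(codes_a, codes_b):
    """Simple percent agreement between two raters' item-level codes."""
    if len(codes_a) != len(codes_b):
        raise ValueError("raters must code the same number of items")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# Hypothetical codes for four items (illustration only, not the study's data)
rater1 = ["include", "exclude", "include", "include"]
rater2 = ["include", "exclude", "exclude", "include"]
print(percent_agreement(rater1, rater2))  # 75.0
```

Percent agreement is the most lenient agreement index; chance-corrected statistics such as Cohen's kappa are often reported alongside it.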
7 | RESULTS
First, we report characteristics of intervention studies. Second, we describe the characteristics of the intervention
studies that included acceptability. Third, we compare study, participant, and intervention characteristics of
studies that included acceptability assessment with those that did not. Last,
we report on acceptability assessment and summarize the reported acceptability data. Table 2 provides participant
and intervention characteristics across all studies and Table 3 provides characteristics of acceptability assessment.
in elementary grades (n = 66; 61.11%). Just over half of studies included students with disabilities (n = 62; 57.41%).
Teachers (n = 50; 46.63%) were most often the interventionists, followed by others such as parents and peers (n = 35;
32.41%) or researchers (n = 30; 27.78%). Interventions were most often delivered in the general education
classroom (n = 41; 37.96%), in class‐wide (n = 49; 45.37%) or individual (n = 36; 33.33%) formats. The most common
intervention target was behavior (n = 60; 55.56%) followed by academic skills (n = 38; 35.18%). A large proportion
of studies that reported acceptability also included quantitative treatment integrity data (n = 91; 84.26%).
7.3 | Comparing school psychology intervention studies with and without acceptability
Comparisons between intervention articles that did and did not report acceptability were conducted using χ2
analyses. No significant differences were found for the year of publication, grade, disability status, interventionist,
or setting of the intervention. Three significant relationships were found, and subsequent 2 × 2 analyses were conducted.
First, a significant relationship was found with intervention delivery χ2(3) = 14.43, p = .002. Acceptability was more
likely to be assessed in individual interventions than in small group interventions χ2(1) = 6.37, p = .011. Acceptability
was reported more often in individual interventions compared to school‐wide and/or a combination of intervention
formats, χ2(1) = 9.21, p = .002. Class‐wide interventions were more likely to include acceptability assessment
compared to school‐wide and/or a combination of intervention formats, χ2(1) = 4.47, p = .034. Second, a significant
relationship was found with intervention skill assessed, χ2(4) = 14.34, p = .006. Acceptability was significantly more
likely to be assessed in studies targeting behavior than in studies targeting academic skills, χ2(1) = 7.88, p = .004.
FIGURE 1 Intervention articles in school psychology journals that include acceptability by year

Studies targeting behavioral skills were more likely to include acceptability data compared to mental health
interventions, χ2(1) = 7.42, p = .006. Acceptability was more likely to be included in studies targeting engagement
compared to mental health, χ2(1) = 3.88, p = .048. Last, a significant relationship was found with the inclusion of
treatment integrity, χ2(2) = 33.83, p < .000001. Treatment integrity was significantly more likely to be included in
articles that assessed for acceptability compared to articles that did not assess for acceptability, χ2(1) = 29.61,
p < .000001.
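The 2 × 2 comparisons reported above are Pearson chi-square tests of independence. A minimal sketch of the statistic for a 2 × 2 contingency table; the `observed` counts below are hypothetical, chosen only to illustrate the computation, and are not the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    table: [[a, b], [c, d]] observed counts. Returns the statistic,
    which has 1 degree of freedom for a 2x2 table.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = delivery format (individual, small group),
# columns = acceptability (reported, not reported)
observed = [[25, 11], [14, 25]]
print(round(chi_square_2x2(observed), 2))  # 8.44
```

With 1 degree of freedom, a statistic above 3.84 corresponds to p < .05.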
[Figure: number of intervention studies and intervention studies with acceptability across the five school psychology journals (JSP, PITS, SPI, SPQ, SPR)]
of studies (59.43%) did not report psychometric data. More than half of the studies presented item scores (n = 64;
60.38%). Acceptability was most often assessed after intervention completion (n = 74; 68.52%).
8 | DISCUSSION
The purpose of this paper was to examine the inclusion and nature of acceptability measures within intervention
studies published in five school psychology journals between 2005 and 2017. This study extends previous reviews
of acceptability data in three ways: (a) comparing characteristics of intervention studies that did and did not include
acceptability, (b) describing the measurement characteristics of acceptability tools, and (c) discussing the use and
delivery of assessment tools.
Over a 12‐year span, an average of about 20 intervention articles were published per year. Of the 268
intervention articles, most studies occurred in public schools with general education students. The most common
form of intervention delivery was to the individual student, followed closely by class‐wide format, with behavioral
skills and academic skills nearly equally represented. This provides valuable information about the focus of school
psychology research as a whole. Teachers followed by researchers were the most common implementers. A majority
of intervention studies included treatment integrity data (62.69%). This is slightly higher than the
results found by Sanetti, Gritter, and Dobey (2011) who reviewed school psychology intervention research from
1995 to 2008 and found 50.2% of the studies included treatment integrity. This level of inclusion may be the result
of increased attention to the importance of treatment integrity in recent years (e.g., DiGennaro Reed & Codding,
2014; Sanetti et al., 2011).
Consistent with previous reviews (Sheridan et al., 1996; Villarreal et al., 2015), just over one third of
intervention studies reported acceptability, most often using a self‐report measure. It is notable that, unlike
treatment integrity, the level of reporting acceptability has not changed since the mid‐1980s (Sheridan et al., 1996).
Teachers were asked to report acceptability in approximately three quarters of these studies even though teachers
were the primary implementer in slightly less than half of studies. When teachers are not the primary implementers
or intervention target, it is interesting to consider what their perspective of the intervention acceptability
represents (e.g., likelihood of future adoption, potential for being the primary implementer, perspective on student
acceptability). Also, evaluating a teacher’s acceptability may be considered more feasible to researchers than
soliciting an entire class’s opinions or eliciting parents’ perspectives.
Acceptability was gathered for students in more than half of these studies with parent input less frequently
solicited. Although this current analysis of the literature suggests that student acceptability is more frequently
collected than in other reviews (i.e., Roach et al., 2009; Villarreal et al., 2015), it is surprising that student
perceptions of intervention participation are not more commonly assessed. Including the perspectives of students
may be difficult for schools, as traditionally there has been “a constant tug‐of‐war between regulating children
and promoting their independence and growth” (Shriberg & Desai, 2014, p. 8). The meaningful participation of
students in decisions affecting them is recognized internationally as the right of all children (United Nations, 1989).
Supporting children to communicate their views through acceptability assessment is one avenue through which
student voices can be heard and respected, particularly when adults are making decisions impacting children
(Nastasi & Naser, 2014; UNICEF, 2014).
reason for similar levels of reporting, this study should be carefully examined to elucidate the relationship between
these variables.
treatment, acceptability assessment has primarily relied on published acceptability rating scales. The lack of
readily available acceptability rating scales may contribute to the large percentage of articles relying on
researcher‐developed acceptability tools, as well as to the inconsistencies in acceptability measurement
we found in this review. Further, having access to published acceptability rating scales allows researchers and
practitioners to use them in the intervention planning process to identify potential barriers, as well
as to contribute to the pool of data regarding the acceptability of particular interventions.
REFERENCES
Allinder, R. M., & Oats, R. G. (1997). Effects of acceptability on teachers’ implementation of curriculum‐based measurement
and student achievement in mathematics computation. Remedial and Special Education, 18, 113–120. https://doi.org/10.
1177/074193259701800205
American Psychological Association (APA). (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57,
1052–1059. https://doi.org/10.1037/0003‐066X.57.12.1052
American Psychological Association (APA). (2006). Evidence‐based practice in psychology. American Psychologist, 61,
271–285. https://doi.org/10.1037/0003‐066X.61.4.271
Briesch, A. M., & Chafouleas, S. M. (2009). Exploring student buy‐in: Initial development of an instrument to measure
likelihood of children’s intervention usage. Journal of Educational and Psychological Consultation, 19, 321–336. https://
doi.org/10.1080/10474410903408885
Briesch, A. M., Chafouleas, S. M., Neugebauer, S. R., & Riley‐Tillman, T. C. (2013). Assessing influences on intervention
implementation: Revision of the usage rating profile‐intervention. Journal of School Psychology, 51, 81–96. https://doi.
org/10.1016/j.jsp.2012.08.006
Callahan, K., Hughes, H. L., Mehta, S., Toussaint, K. A., Nichols, S. M., Ma, P. S., … Wang, H. T. (2017). Social validity of
evidence‐based practices and emerging interventions in autism. Focus on Autism and Other Developmental Disabilities, 32,
188–197.
Carter, S. L. (2007). Review of recent treatment acceptability research. Education and Training in Developmental Disabilities,
42, 301–316.
Council for Exceptional Children (CEC). (2014). Council for exceptional children: Standard of evidence‐based practices in
special education. Teaching Exceptional Children, 46, 206–212.
Cross Calvert, S., & Johnston, C. (1990). Acceptability of treatments for child behavior problems: Issues and implications for
future research. Journal of Clinical Child Psychology, 19, 61–74. https://doi.org/10.1207/s15374424jccp1901_8
Dart, E. H., Cook, C. R., Collins, T. A., Gresham, F. M., & Chenier, J. S. (2012). Test driving interventions to increase
treatment integrity and student outcomes. School Psychology Review, 41, 467–481.
DiGennaro Reed, F. D., & Codding, R. S. (2014). Advancements in procedural fidelity assessment and intervention:
Introduction to the special issue. Journal of Behavioral Education, 23, 1–18. https://doi.org/10.1007/s10864‐013‐9191‐3
Eckert, T. L., Hier, B. O., Hamsho, N. F., & Malandrino, R. D. (2017). Assessing children’s perceptions of academic
interventions: The Kids Intervention Profile. School Psychology Quarterly, 32, 268–281. https://doi.org/10.1037/spq0000200
Eckert, T. L., & Hintze, J. M. (2000). Behavioral conceptions and applications of acceptability: Issues related to service
delivery and research methodology. School Psychology Quarterly, 15, 123–148. https://doi.org/10.1037/h0088782
Elliott, S. N. (1988). Acceptability of behavioral treatments: Review of variables that influence treatment selection.
Professional Psychology: Research and Practice, 19, 68–80. https://doi.org/10.1037/0735‐7028.19.1.68
Fawcett, S. B. (1991). Social validity: A note on methodology. Journal of Applied Behavior Analysis, 24, 235–239. https://doi.
org/10.1901/jaba.1991.24‐235
Finn, C. A., & Sladeczek, I. E. (2001). Assessing the social validity of behavioral interventions: A review of treatment
acceptability measures. School Psychology Quarterly, 16, 176–206. https://doi.org/10.1521/scpq.16.2.176.18703
Finney, J. W. (1991). On further development of the concept of social validity. Journal of Applied Behavior Analysis, 24,
245–249. https://doi.org/10.1901/jaba.1991.24‐245
Foster, S. L., & Mash, E. J. (1999). Assessing social validity in clinical treatment research: Issues and procedures. Journal of
Consulting and Clinical Psychology, 67, 308–319. https://doi.org/10.1037/0022‐006X.67.3.308
Hier, B. O., & Eckert, T. L. (2014). Evaluating elementary‐aged students’ abilities to generalize and maintain fluency gains of
a performance feedback writing intervention. School Psychology Quarterly, 29, 488–502. https://doi.org/10.1037/
spq0000040
Hier, B. O., & Eckert, T. L. (2016). Programming generality into a performance feedback writing intervention: A randomized
controlled trial. Journal of School Psychology, 56, 111–131.
Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior
Modification, 1, 427–452.
Kazdin, A. E. (1980). Acceptability of alternative treatments for deviant child behavior. Journal of Applied Behavior Analysis,
13(2), 259–273.
Kelley, M. L., Heffer, R. W., Gresham, F. M., & Elliott, S. N. (1989). Development of a modified treatment evaluation
inventory. Journal of Psychopathology and Behavioral Assessment, 11(3), 235–247.
Kennedy, C. H. (2002). The maintenance of behavior change as an indicator of social validity. Behavior Modification, 26,
594–604.
Long, A. C. J., Sanetti, L. M. H., Collier‐Meek, M. A., Gallucci, J., Altschaefl, M., & Kratochwill, T. R. (2016). An exploratory
investigation of teachers' intervention planning and perceived implementation barriers. Journal of School Psychology, 55,
1–26.
Mautone, J. A., DuPaul, G. J., Jitendra, A. K., Tresco, K. E., Junod, R. V., & Volpe, R. J. (2009). The relationship between
treatment integrity and acceptability of reading interventions for children with Attention‐Deficit/Hyperactivity
Disorder. Psychology in the Schools, 46, 919–931.
Martens, B. K., Witt, J. C., Elliott, S. N., & Darveaux, D. X. (1985). Teacher judgments concerning the acceptability of school‐
based interventions. Professional Psychology: Research and Practice, 16, 191–198. https://doi.org/10.1037/0735‐7028.
16.2.191
Miller, F. G., Neugebauer, S. R., Chafouleas, S. M., Briesch, A. M., & Riley‐Tillman, T. C. (2013). Examining innovation usage:
Construct validation of the Usage Rating Profile ‐ Assessment. Poster presentation at the American Psychological
Association Annual Convention, Honolulu, HI.
Miltenberger, R. G. (1990). Assessment of treatment acceptability: A review of the literature. Topics in Early Childhood
Special Education, 10, 24–38.
Nastasi, B. K., & Naser, S. (2014). Child rights as a framework for advancing professional standards for practice, ethics, and
professional development in school psychology. School Psychology International, 35, 36–49.
National Association of School Psychologists (NASP). (2010). Model for comprehensive and integrated school psychological
services. Retrieved from http://www.nasponline.org/standards/practice-model/
Peterson, C. A., & McConnell, S. R. (1996). Factors related to intervention integrity and child outcome in social skills
interventions. Journal of Early Intervention, 20, 146–164.
Reimers, T. M., & Wacker, D. P. (1988). Parents’ ratings of the acceptability of behavioral treatment recommendations
made in an outpatient clinic: A preliminary analysis of the influence of treatment effectiveness. Behavioral Disorders, 14,
7–15.
Reimers, T. M., Wacker, D. P., Cooper, L. J., & DeRaad, A. O. (1992). Acceptability of behavioral treatments for children:
Analog and naturalistic evaluations by parents. School Psychology Review, 21, 628–643.
Reimers, T. M., Wacker, D. P., & Koeppl, G. (1987). Acceptability of behavioral interventions: A review of the literature.
School Psychology Review, 16, 212–227.
Roach, A. T., Wixson, C. S., Talapatra, D., & LaSalle, T. P. (2009). Missing voices in school psychology research: A review of
the literature 2002–2007. The School Psychologist, 63, 5–10.
Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with children in the school
psychology literature from 1995 to 2008. School Psychology Review, 40, 72–84.
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied
Behavior Analysis, 24, 189–204. https://doi.org/10.1901/jaba.1991.24‐189
Sheridan, S. M., Welch, M., & Orme, S. F. (1996). Is consultation effective?: A review of outcome research. Remedial and
Special Education, 17, 341–354.
Shriberg, D., & Desai, P. (2014). Bridging social justice and children’s rights to enhance school psychology scholarship and
practice. Psychology in the Schools, 51, 3–14.
Sterling‐Turner, H. E., & Watson, T. S. (2002). An analog investigation of the relationship between treatment acceptability
and treatment integrity. Journal of Behavioral Education, 11, 39–50.
Turco, T. L., & Elliott, S. N. (1986). Students' acceptability ratings of interventions for classroom misbehaviors: A study of
well‐behaving and misbehaving youth. Journal of Psychoeducational Assessment, 4(4), 281–289.
UNICEF. (2014). Rights under the convention on the rights of the child. Available from http://www.unicef.org/crc/index_30177.
html
United Nations. (1989). Convention on the rights of the child. Available from http://www2.ohchr.org/english/law/crc.htm
Villarreal, V., Ponce, C., & Gutierrez, H. (2015). Treatment acceptability of interventions published in six school psychology
journals. School Psychology International, 36, 322–332. https://doi.org/10.1177/0143034315574153
Von Brock, M. B., & Elliott, S. N. (1987). Influence of treatment effectiveness information on the acceptability of classroom
interventions. Journal of School Psychology, 25, 131–144.
Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom management strategies. In Kratochwill, T. R. (Ed.), Advances in
school psychology (Vol. 4, pp. 251–288). Hillsdale, NJ: Erlbaum.
Witt, J. C., & Martens, B. K. (1983). Assessing the acceptability of behavioral interventions used in classrooms. Psychology in
the Schools, 20, 510–517.
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its
heart. Journal of Applied Behavior Analysis, 11, 203–214.
How to cite this article: Silva MR, Collier‐Meek MA, Codding RS, DeFouw ER. Acceptability assessment of
school psychology interventions from 2005 to 2017. Psychol Schs. 2019;1–16.
https://doi.org/10.1002/pits.22306