Psychology Assessment - I - M2
STRUCTURE
2.0 Learning objectives
2.1 Introduction
2.2 Referral sources
2.3 Components of assessment
2.4 Planning, data gathering, analysis (qualitative, quantitative), interpretation, reporting
2.5 Factors influencing assessment.
2.6 Psychological report- purpose, nature, style, common errors.
2.7 Summary
2.8 Keywords
2.9 Learning Activity
2.10 Unit End Questions
2.11 References
Each psychologist who appears in court must have their credentials confirmed. Clinical
experience in treating specialist illnesses and related publication credits are important factors
to examine. Psychologists' legal work is often regarded positively by the courts, and they
may have achieved parity with psychiatrists (Sales & Miller, 1994).
A typical educational placement starts with a trip to the classroom to observe a child's
conduct in natural settings. Observing the interaction between the teacher and the child is an
important part of this visit. Behavioral issues are frequently linked to the child–teacher
relationship. The teacher's response style to a student might sometimes be as big a part of the
problem as the student. As a result, classroom observations can be upsetting for teachers and
should be handled with care.
In many ways, observing the child in a larger setting runs counter to the tradition of individual assessment. Yet individual testing typically yields a limited and narrow range of data, particularly because children are not reliable self-reporters and parents or caregivers
may be biased. Additional critical data may be acquired if testing is integrated with a family or school assessment, although there may be strong opposition. This opposition may be due to
legal or ethical limitations on the scope of services that the school can give or the demands
that a psychologist can make on the parents of the youngster. Often, the initial focus is on the student as a "problem child" or "designated patient." This focus may obscure larger, more complex, and still more important issues, such as marital strife, a troubled teacher, misunderstandings between teacher and parents, or a conflict between the school and the parents. All or some of the individuals involved may have
a vested interest in seeing the student as the problem, rather than realizing that a
dysfunctional educational system or family issues may be to blame. An individually directed evaluation can still be carried out with excellent results. Unless the wider circumstances are evaluated, understood, and addressed, however, the assessment may be ineffective in tackling both the individual difficulties and the larger organizational or interpersonal problems.
Behavioral observations, a test of intellectual abilities such as the Wechsler Intelligence Scale for Children–V, Stanford–Binet–V, Woodcock-Johnson Psychoeducational Battery–IV (Woodcock, Schrank, Mather, & McGrew, 2014), or Kaufman Assessment Battery for
Children–II (K-ABC-II; Kaufman & Kaufman, 2004), and tests of emotional and behavioral
functioning are included in most school assessments of children. In the past, projective techniques were commonly used to measure children's emotional functioning. However, many projective tests have been found to have poor psychometric properties and to take a long time to administer, score, and interpret. As a result, projective instruments are being phased out in favor of a wide range of behavioral rating instruments (Kamphaus,
Petoskey, & Rowe, 2000). The Achenbach Child Behavior Checklist (Achenbach &
Rescorla, 2001), the Conners–3 Parent and Teacher Rating Scales (Conners, 2008), and the
Behavior Assessment System for Children–3 (BASC-3; C. R. Reynolds & Kamphaus, 2015) are just a few examples. A variety of reliable objective instruments have also been created, such as the Personality Inventory for Children–2 (PIC-2; Lachar & Gruber, 2001).
This questionnaire is similar to the MMPI, but it is completed by a child's parent. It generates four validity scales for detecting faking as well as 12 clinical scales for depression, family relations, delinquency, anxiety, and hyperactivity, among others. The Millon Adolescent Clinical Inventory (MACI; Millon, 1993) or the MMPI-A can be used to assess adolescent personality. The Vineland Adaptive Behavior Scales–II (Sparrow, Cicchetti, & Balla, 2005), the Wechsler Individual Achievement Test–III (WIAT-III; Pearson, 2009a), and the Wide Range Achievement Test–IV (WRAT-IV; Wilkinson & Robertson, 2007) are three more well-designed scales that are increasingly employed.
Any report created for an educational context should emphasize a child's strengths as well as his or her weaknesses. Understanding a child's strengths can boost the child's self-esteem and help effect change in a broader sense. Realistic and practical recommendations
should be made. When a clinician has a complete awareness of relevant resources in the
community, the school system, and the classroom environment, they can offer
recommendations more effectively. This is especially essential because the quality and
resources available in one school or school system may be vastly different from those
available in another. Recommendations usually state which skills must be learned, how they
should be learned, a hierarchy of objectives, and ways for minimizing difficult-to-learn
behaviors.
A recommendation for special education should be made only when a regular class would clearly not be as beneficial. Recommendations, however, are not the final product. They are starting points that should be expanded and amended as further findings emerge. A psychological report should, ideally, be followed up with ongoing monitoring.
Children's psychoeducational assessments should be done in two stages. The nature and
quality of the child's learning environment should be assessed in the first phase. A child cannot be expected to perform effectively if he or she has not received instruction of adequate quality. It must therefore first be established that the child is having difficulty despite receiving adequate teaching. The second phase entails a complete assessment battery
that includes tests of intellectual capacities, academic skills, adaptive behavior, and the
detection of any biological problems that could interfere with learning. Memory, spatial organization, abstract reasoning, and sequencing are examples of relevant cognitive abilities. Students
will not function successfully regardless of their academic or intellectual aptitude unless they
have relevant adaptive qualities, such as social skills, adequate motivation and attention, and
the capacity to regulate urges. Assessing a child's educational beliefs and attitudes is critical
because they affect whether the student is willing to use whatever resources are available to
him or her. Similarly, a person's level of personal efficacy plays a role in determining
whether or not he or she is capable of engaging in actions that lead to the achievement of the
goals that are important to them. Poor vision, hearing loss, hunger, excessive weariness,
malnutrition, and endocrine dysfunction are all physical issues that can make learning
difficult.
The foregoing principles clearly position children's assessment in educational settings in a far
broader context than just interpreting test scores. Relationships between the teacher, family,
and student, as well as the relative quality of the learning environment, must be evaluated. In
addition, the child's values, motivation, and sense of personal efficacy, as well as any
biological issues, must be considered. Examiners must learn about school and community
resources, as well as population-specific instruments with a high level of reliability and
validity.
5) Psychological Clinic
In contrast to medical, legal, and educational settings, where the psychologist is often
consulted by the decision maker, the psychologist working in a psychological clinic is
frequently the decision maker. There are several types of referrals that come into the
psychiatric clinic on a regular basis. Individuals who are self-referred and seeking assistance
from psychological distress are perhaps the most common. Extensive psychological testing
may not be necessary or even desirable for many of these people, as their diagnoses and
difficulties are likely to be straightforward, and time spent on testing could be better spent on
therapy.
Brief questionnaires aimed at assessing client characteristics most relevant to treatment
planning, on the other hand, can aid in the development of therapies that will both accelerate
the rate of progress and optimize the outcome (see Chapters 13 and 14). Brief instruments
can also be used to monitor therapeutic response or advise relevant changes, improving the
chances of a successful intervention (Lambert & Hawkins, 2004).
Furthermore, a psychologist may question whether the treatment available at a psychological clinic is appropriate for particular groups of self-referred clients. These clients can include those who have serious medical issues, people whose legal issues need to be clarified, and people
who need a greater level of care. In certain situations, psychological testing may be required to
acquire additional information. The testing's primary goal would be to assist in decision-making
rather than to provide direct assistance to the client. Others who may benefit from psychological
testing in clinics are individuals who are being seen in the clinic already, either because their
diagnoses are ambiguous or because their therapy has stagnated or plateaued. A full review could
provide clear direction in these situations.
Children referred by their parents for educational or behavioral difficulties, as well as referrals from other decision makers, represent two more scenarios in which psychological testing may be necessary. Special care must be taken before testing when referrals are made for poor school
performance or legal issues. First and foremost, the clinician must gain a thorough grasp of the client's social network as well as the reason for the referral. This comprehensive understanding
could include a review of previous therapy attempts as well as a synopsis of the connection
between the parents, school, courts, and child. A referral usually occurs at the end of a long
series of events, and it is critical to acquire information about these occurrences. The clinician
may elect to meet with other individuals who have gotten involved in the case, such as the school
principal, previous therapists, probation officer, attorney, or teacher, after the basis for the
referral has been addressed. This conference could reveal a variety of concerns that demand
decisions, such as referral for family therapy, special education placement, a change in custody
agreements between divorced parents, individual counselling for other family members, and a
change in school. All of these factors may have an impact on the testing's relevance and
methodology, but they may not be obvious if the initial referral question is taken at face value.
Referrals from other decision-makers are occasionally made to psychologists. An attorney, for
example, may wish to determine if a person is competent to stand trial. Other referrals can come
from a physician who wants to know if a patient with a head injury can return to work or drive,
or if the physician needs to track changes in the patient's rehabilitation.
So far, this discussion of the many circumstances in which psychological testing is used has focused on when to test and on how tests can be most useful in making decisions. A few more summary remarks should be made. As previously stated, a referral source may not always be able to appropriately formulate the referral question. In truth, the referral question is rarely obvious or simple. It is the evaluator's job to look beyond the referral question and determine the referral's foundation in its broadest sense. As a result,
psychologists must have a thorough grasp of the client's social environment, including
interpersonal issues, familial dynamics, and the chain of events that led to the referral. A second
key aspect is that, in addition to clarifying the referral question, psychologists are responsible for acquiring knowledge about the setting for which they are writing their reports. Learning
the right terminology, the roles of the individuals working in the setting, the decisions faced by
decision makers, and the philosophical and theoretical ideas they hold are all part of this
information. It's also crucial for clinicians to grasp the principles that underpin the setting and
determine whether they align with their own. When working in particular circumstances,
psychologists who do not believe in aversion therapy, corporal punishment, or electroconvulsive
therapy, for example, may come into conflict. As a result, psychologists should be aware of how
the information they provide to their referral source will be used. It's critical for them to
understand that they bear a large amount of responsibility, because the judgments they make
about their clients, which are typically based on assessment results, can be key turning points in
their lives. If the information could be utilized in a way that goes against the evaluator's values,
he or she should reconsider, clarify, or possibly change his or her relationship with the referral
setting.
All of these considerations are consistent with the emphasis on the evaluator conducting psychological assessment in the role of an expert clinician rather than as a technician.
i) Interviews
When conducting clinical interviews, psychologists attempt to establish a comfortable
setting in which the client can discuss the difficulties that are bothering him or her. The
psychologist does not answer the phone, read text messages, or respond to emails during
the assessment interview, so there are no interruptions. Offices should ideally be
soundproofed to reduce distracting background noises. To put clients at ease, the
psychologist maintains a calm and relaxed demeanor. The clinical interview, however, is not a social visit. It differs in significant ways from conversations a client might have with friends, a hairdresser, or a stranger on a long train journey. Allowing
someone to simply share their story differs from conducting a clinical assessment interview. Empathic listening may be enough to bring brief respite to a troubled friend, but it is not enough to allow the psychologist to make a diagnosis or begin treatment planning.
Because an assessment interview is not an ordinary conversation, the client may feel
more at ease addressing painful or embarrassing topics than he or she might with friends.
The psychologist is in charge of organizing the session so that all pertinent subjects are
covered during the assessment interviews. The extent to which the psychologist overtly controls the session, the way in which questions are asked, and the topics discussed are all determined by the psychologist's theoretical perspective and training.
Regardless of orientation, psychologists are taught to ask questions in a way that encourages the client to participate in the interview.
The distinction between open and closed questions is significant. Open questions require
more detailed responses from the client and cannot be addressed with a simple yes or no.
Closed questions, on the other hand, have a single response. Each has its own set of
benefits and drawbacks. Open inquiries allow the client to provide a more sophisticated
response without implying that a certain response is expected. However, open questioning may lead the client to tell a long, tangential story with no relevance, in which case the psychologist must redirect the client back to the topic at hand. Closed
questions, on the other hand, provide brief, less confusing responses, allowing for quick
discussion of a wide range of topics. Exhibit 6.4 illustrates the difference between open
and closed inquiries. Many psychologists find it helpful to start a conversation with an
open-ended question and then follow up with closed inquiries that elucidate the
response's contents.
Although some individuals believe that asking questions about difficult topics like
suicide will make people more suicidal, this is not the case. Because the way a question is
phrased can influence the sort of response, psychologists avoid asking leading questions or putting words into the client's mouth. When a client's first response to a topic is evasive
or ambiguous, the psychologist must encourage the client to elaborate or explain: "Tell me what you mean..."; "Tell me more about it." In contrast to common conversational
conventions, the psychologist softly pursues a line of inquiry until the question is
answered. So, if Jessica changed the subject or made a joke when asked about her new
relationship, a friend may conclude that she didn't want to talk about it and go on to
something else, whereas a psychologist might inquire whether she noticed she was
having trouble talking about it. Because professional assessment interviews are not the
same as casual conversations, the psychologist may ask difficult-to-answer questions (for example, "What was it like for you when you had the miscarriage?"; "As you were forcing yourself to vomit, what was going through your mind?"; "When you tell someone you've been diagnosed with schizophrenia, how do they react?"). Clients may be stumped for a
response and must pause before responding. Psychologists employ silence to give clients
time to think and reflect, so they don't feel obligated to fill in the gaps in speech like they
might in a social setting.
As we have emphasized, assessment interviews are not like typical conversations.
During assessment interviews, psychologists must be on the lookout for client issues.
Given the increased risk of suicide among persons suffering from a depressive disorder, it is standard practice when screening a depressed client to ask questions to estimate the likelihood of a suicide attempt. Those inquiries must be based on current knowledge of the factors that
raise the risk of suicidal behavior. Suicidal thoughts, plans, and lethality, as well as
access to the means to attempt suicide, are all addressed by psychologists. Given the
substantial links between a history of suicidal behavior and the risk of future suicidality,
questions about suicide attempts must be included. Because some suicidal clients may
simply make a generic remark about their level of misery or hopelessness, the
psychologist must follow up with questions that measure the current danger. Exhibit 6.10
illustrates the types of inquiries psychologists ask when assessing the risk of suicide. If a
psychologist determines that a patient has a low risk of suicide, the psychologist should
make sure that the patient knows the phone numbers for a suicide helpline and a nearby
hospital. If the patient appears to be in danger, the psychologist may need to take them to
the nearest hospital's emergency room.
ii) Observations
The psychologist keeps a close eye on the client during the evaluation interview.
Important data can be collected by monitoring the client in addition to the answers to
inquiries. Although comments about the client's appearance and grooming used to be common in clinical assessments, it is now customary to record only notable characteristics that are relevant to the evaluation. Some people find comments on a client's attractiveness disrespectful because they are unrelated to the referral question. The client's activity level,
attention span, and impulsivity are all noted by the psychologist. The client's speech is
carefully examined, with any difficulties or irregularities noted. Clients' bodily motions
and behaviors, as well as the ease with which they can be interacted with, are observed by
the psychologist.
Children with ADHD, for example, frequently behave well when given the full attention of an unfamiliar adult in novel surroundings. As a result, a psychologist may underestimate the severity of a child's difficulties by assuming that the child's behavior
during an intake interview is typical of how the child acts in general. Naturalistic
observations are used to gain information that is difficult to obtain in the office. They enable the observation of behaviors that clients may not be able to report in interviews or questionnaires because they are either unaware of or uncomfortable with them. In a
familiar situation, home observations provide information about how the child and
parents behave. School observations provide information regarding the school
environment, teaching style, and a student's behavior in a classroom setting. Outside of
the clinic, permission is required to conduct observations. The parent (and, if the child is
deemed capable of consenting, the youngster) must provide their approval for the child to
be observed. Observations at school must also be approved by school personnel.
Naturalistic observations are timed to coincide with the most likely occurrence of the
issue behavior. The hours around supper, homework, and bedtime preparation are times
of conflict and struggle in many households with young children. Depending on the assessment question, school observations may be arranged to observe the child in both preferred and non-preferred activities, with different teachers, during quiet study periods, and on the playground.
The purpose of the observer is to be like the proverbial fly on the wall, noticing
everything while avoiding being noticed. Observers present themselves in a professional
yet unobtrusive manner. After a few brief greetings, the observer invites everyone to go
about their business as usual. The presence of a clipboard and a pen serves as a reminder
to both adults and children that this is not a typical social visit. Even though adults may initially try their hardest to make a good impression and be on their best behavior, children are remarkably effective at getting adults to behave as they usually do: children will comment on unusual behaviors that adults engage in to impress the observer.
Direct observation data is used to establish hypotheses about the child's functioning,
which are then tested against other assessment data. It would certainly be inappropriate to make diagnostic decisions solely on the basis of observations, or to assume that the child's behavior during the observation period was typical. These observations provide only a sliver of information on how the child interacts with significant others at home and
at school. Observational data, when combined with information from interviews, tests, and other people's reports, can confirm or temper the developing picture of the child's strengths and weaknesses.
A significant body of psychological research shows that people are influenced by others' appearance (Garb, 1998). The belief that persons who wear glasses are more intelligent than those who do not is a well-known example. Hairstyles, grooming, posture, hand gestures, and voice tone are among the other sources of bias. Psychologists are not immune to
these biases, which can influence their decisions. When interacting with a new group, we
are especially prone to making mistakes. A greeting in certain parts of Canada may
consist of a barely visible nod of the head, yet in others, it may include a handshake,
kisses on both cheeks, and nose touching. According to ethical norms, psychologists must
become conscious of how their background affects their relationships with people and
their judgments of others' behavior.
Given the tremendous insights we have gained into parent–child and couple relationships from studies using systematic observation of interactions, one might expect psychologists to have borrowed these observation systems for use in the clinic. Despite the
fact that clinicians rely heavily on observation in their assessments, they rarely employ observational coding systems that have been standardized or shown to be reliable and valid (Mash & Foster, 2001). Although formal diagnostic interviews developed in a
research context have been adjusted for use in clinical practice, observations have not
made the same transition from research to clinical practice. One big stumbling block is cost. Some of the most valuable research coding systems require hours of coding to analyze a single hour of interaction. These costs are exceedingly difficult to justify in a cost-conscious health-care system. As a result, researchers are working on
observational methods that need less coding time. For example, the Disruptive Behavior
Diagnostic Observation Schedule (DB-DOS) is a brief observational approach that has
been demonstrated to aid in the identification of disruptive behavior problems and ADHD
in preschoolers (Bunte et al., 2013).
iii) Tests
Psychologists are experts in the creation and application of tests for the study and
treatment of human behavior. Although you can find quick tests of various concepts on
websites and magazines, creating a scientifically sound psychological test requires more
than just writing a few questions and coming up with a catchy name. The American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) (AERA, APA, & NCME, 2014) established principles for psychologists to follow when developing and using tests and assessment procedures. A number of requirements must
be met for a psychological test to be useful in research or clinical practice, as you will see
in the following sections.
But first, let's define what a psychological test entails. Although defining a test appears to
be a simple operation, it is actually quite complicated. A test is defined as "an evaluative
device or procedure in which a sample of an examinee's behavior in a specified domain is
obtained and then evaluated and scored using a standardized process" in the Standards
(AERA et al., 2014). This description is cumbersome and may not be easily understood
by non-psychologists, despite the fact that it is broad enough to incorporate a variety of
testing procedures (including interviews, observation, and self-report). Hunsley, Lee, Wood, and Taylor (2014) instead define a test by its intended use. A procedure is a test if
(a) the clinician's intent is to collect a sample of behavior that will be used to generate statements about a person, the person's experiences, or the person's psychological functioning, and
(b) the clinician claims or implies that the accuracy or validity of these statements is due to the way the sample of behavior was collected and interpreted rather than to the clinician's expertise, authority, or special qualifications.
So, while you might be able to quickly create a questionnaire to evaluate some element of human functioning, it cannot be considered a scientifically sound test unless it has been shown to meet standards of reliability, validity, and norms.
What difference does the definition of a psychological test make? Although there are
various technical reasons for this, there is also a practical explanation that has far-
reaching real-world implications. Psychological tests are commonly employed in legal
and quasi-legal contexts, such as when a judge must decide on child custody or when a
tribunal must decide whether to grant a disabled worker a disability pension. Without
measures to guarantee that psychological tests follow scientific criteria, any series of
questions may be referred to as a test, and the findings could be presumed to be
scientifically correct and valid. All mental health practitioners conduct evaluations, but psychologists have significantly more training in testing issues and are far more likely to employ tests than other mental health professionals. Whiteside, Sattler, Hathaway, and Douglas
(2016) conducted a survey of 339 clinicians from various fields who were delivering
mental health care to children with anxiety problems. Participants were asked about their use of parent-report questionnaires, child-report questionnaires, and structured diagnostic interviews. Respondents were then divided into groups based on how frequently they used EBA tools. Doctoral-level psychologists used parent-report and child-report questionnaires more frequently than licensed counselors or master's-level social workers; both groups used structured diagnostic interviews infrequently.
Standardization
Standardization is an essential aspect of a psychological test, and it denotes consistency in the
process used to administer and score the test across clinicians and testing days (Anastasi &
Urbina, 1997). Without standardization, it is nearly impossible for the same psychologist, or any other psychologist, to reproduce the information acquired in an examination. Furthermore,
without standardization, test findings are likely to be highly specific to the unique elements of
the testing environment and unlikely to yield data that can be extended to other psychologists'
tests, let alone other situations in the person's life.
Reliability
In both clinical and research settings, the question of how reliable a test must be frequently arises. Reliability concerns whether all parts of the test contribute meaningfully to the data obtained (internal consistency), whether similar results would be obtained if the person were retested at some point after the initial test (test-retest reliability), and whether similar results would be obtained if the test were administered and/or scored by a different evaluator (inter-rater or inter-scorer reliability). If we want to generalize the test results and
their psychological implications beyond the present assessment context, we need reliable results.
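As a rough numerical sketch of the internal-consistency idea described above, Cronbach's alpha can be computed directly from a table of item scores. The function and the small ratings table below are illustrative only (hypothetical data, not drawn from any published scale):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha from a respondents-by-items table of scores."""
    k = len(scores[0])                       # number of items
    def var(xs):                             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 3-item scale answered by 4 respondents (rows):
ratings = [[1, 2, 1], [2, 3, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(ratings), 3))     # prints 0.971
```

The closer the items track one another across respondents, the closer alpha approaches 1; items that vary independently pull it toward 0.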
Standardized stimuli, administration, and scoring are all prerequisites for good test reliability, but they do not guarantee it. A test may have too many components that are
influenced by irrelevant client characteristics, the testing situation (such as demand
characteristics related to the testing purpose), or the assessing psychologist's behavior. It's also
possible that the test's scoring criteria are too difficult or lacking in detail to allow for accurate
scoring.
Validity
When we talk about test validity, we're talking about how much evidence there is that the test
actually measures what it claims to measure, as well as how the test findings are interpreted.
Because a test claiming to measure one construct may actually be measuring another or be
misconstrued, a standardized test that yields trustworthy results does not always yield valid data.
Test validity entails ensuring that the test includes items representative of all aspects of the underlying psychological construct the test is designed to measure (evidence of content validity), that the data provided are consistent with theoretical postulates associated with the phenomenon being assessed (evidence of concurrent validity and evidence of predictive validity), and that the test provides a relatively pure measure of the construct that is minimally confounded with other constructs (evidence of discriminant validity).
Evidence of incremental validity, which is the extent to which a measure adds to the prediction
of a criterion over what can be anticipated by other sources of data, should be addressed in
practical contexts, such as in clinical evaluation (Hunsley & Meyer, 2003; Sechrest, 1963).
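The logic of incremental validity can be sketched numerically: compare the variance in a criterion explained by an existing measure alone with the variance explained once the new measure is added. Everything below is simulated toy data; the names `existing`, `new_test`, and `criterion` are hypothetical, not from any actual instrument:

```python
import random
from math import sqrt

random.seed(7)
n = 300
existing = [random.gauss(0, 1) for _ in range(n)]     # established predictor
new_test = [random.gauss(0, 1) for _ in range(n)]     # candidate new measure
# Simulated criterion: both measures genuinely contribute, plus noise.
criterion = [0.5 * e + 0.4 * t + random.gauss(0, 0.8)
             for e, t in zip(existing, new_test)]

def corr(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

r1 = corr(existing, criterion)
r2 = corr(new_test, criterion)
r12 = corr(existing, new_test)
r2_base = r1 ** 2                                     # R^2, existing measure only
# R^2 for two predictors, from the standard multiple-correlation formula:
r2_full = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
print(f"incremental validity (delta R^2): {r2_full - r2_base:.3f}")
```

A substantial gain in explained variance (delta R-squared) is the kind of evidence that supports adding the new measure to an assessment battery; a negligible gain suggests it is redundant with existing data.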
Norms
It is critical to employ either norms or particular criterion-related cut-off scores to interpret the
findings collected from a client in a relevant way (AERA et al., 2014). It is impossible to
determine the specific meaning of any test results without such reference material. So, if you
were given a 44 on an emotional maturity test, you wouldn't know what it meant unless you
knew the range of possible scores and how most other people scored. In psychological testing,
comparisons must be made to either test-specific criteria (e.g., a certain level of accuracy
indicated in the exam is required for good job performance) or to some sort of norms.
In clinical psychology, norms are for the most part established by test developers. Most
crucially, decisions must be made about the populations on which the test will be normed. It is possible to develop
criteria for comparing a given score to those obtained in the overall population or certain
subgroups of the general population (e.g., gender-specific norms). So, if your emotional maturity
score of 44 was significantly greater than the overall population's average, you might be rather
satisfied. It is also possible to develop rules for determining the possibility of belonging to
specific theoretical or real groups (e.g., non-distressed versus psychologically disordered
groups). As with validity issues, it may be necessary to develop multiple sets of norms for a
test depending on the group being tested and the testing goal (e.g., norms for different age and
ethnic groups).
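To make the "44 on an emotional maturity test" example concrete, suppose (purely hypothetically) that the normative sample has a mean of 38 and a standard deviation of 6; a z-score and a normal-curve percentile then locate the client within that reference group.

```python
# Sketch: interpreting a raw score of 44 against hypothetical norms.
# The normative mean (38) and SD (6) are invented for illustration.
import math

def percentile_from_z(z):
    # Standard normal CDF via the error function, scaled to 0-100
    return 0.5 * (1 + math.erf(z / math.sqrt(2))) * 100

raw = 44
norm_mean, norm_sd = 38, 6
z = (raw - norm_mean) / norm_sd
print(f"z = {z:.2f}, percentile ≈ {percentile_from_z(z):.0f}")
```

With these invented norms the score of 44 sits one standard deviation above the mean, around the 84th percentile; with different norms (say, a gender- or age-specific reference group) the same raw score could carry a very different meaning.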
iv) Case studies
Case studies, like case studies in medicine, have a long and illustrious history in clinical
psychology. The professional community has been enriched by descriptions of
uncommon presenting conditions or novel therapies.
A typical case study entails a detailed description of a single patient, couple, or family
that demonstrates a novel or unusual observation or therapeutic innovation. Case studies
are an effective way to make preliminary linkages between events, behaviors, and
symptoms that haven't been covered in previous studies. Case studies can provide a
wealth of research theories for causation and maintenance of illnesses. They can also
serve as the first test bed for new assessment or intervention procedures. Case studies
offer heuristic value, which means they call other professionals' attention to a situation.
Case studies have scientific importance since they can generate ideas, but they do not allow
for rigorous testing of theories. The most serious flaw in the case study technique is that
most threats to internal validity go unaddressed (Kazdin, 1981). Consider the treatment
of Zach's homework-related temper tantrums as an example. Typically, the author of a
case study describes the client's symptoms or issues before and after treatment (such as
the number of tantrums and their intensity). Alternative explanations cannot be ruled out
in this modest research design, even if the author would prefer to claim that any
improvement was attributable to treatment effects. Normal developmental changes (i.e.,
maturation—the simple effects of Zach growing older or having no homework during the
holidays), the abating of symptoms that typically occurs over time (i.e., regression to the
mean), or life events outside of therapy (i.e., history effects, such as getting a new
teacher) could all account for the observed changes.
B. Processing Data and Drawing Conclusions
Following the collection of assessment data, the clinician must establish what those findings mean.
If the data are to be valuable in helping the clinician achieve his or her assessment goals, they must
be transformed from their raw state into interpretations and conclusions that address the referral
question. The processing challenge is daunting because it necessitates an inferential leap from
known data to what is thought to be true based on those data. In general, inference becomes more
prone to error as the distance between data and conclusion grows.
Consider the following: On a lawn, a young boy is chopping an earthworm in half. It would be
easy to conclude from this observational data that the child is nasty and violent, and that he could
grow up to be dangerous. These deductions, however, would be incorrect, because "what the
observer couldn't see was what the child, who happened to have few friends, thought as he cut
the worm in half: 'There! You'll have someone to play with now'" (Goldfried & Sprafkin, 1974,
p. 305). In short, complex inference can be risky, especially when it is based on limited data.
It's also challenging to process assessment data because information from diverse sources must
be combined. Unfortunately, there are few empirical standards for combining data from
interviews, tests, observations, and other sources to arrive at comprehensive conclusions. As a
result, clinicians must frequently rely on clinical judgement to get their conclusions.
Purpose
In therapeutic settings, psychological assessment serves three main functions. The initial goal is
to identify, operationalize, and assess a client's adaptive and maladaptive behaviors as well as
therapy objectives. A second goal is to identify, operationalize, and measure elements that
influence a client's adaptive and maladaptive behaviors, as well as their ability to meet treatment
objectives. A third goal is to combine assessment data so that interventions to improve a client's
quality of life can be designed and evaluated. Consider the difficult evaluation challenges that a
psychologist faces while working with a client who is experiencing severe fear, social isolation,
and frequent disagreements with a partner. First, the clinician must choose an evaluation
technique that can measure and monitor these several problems throughout the intervention. In order to
understand why the client is experiencing isolation, panic episodes, and conflict, the evaluation
technique must also allow the clinician to uncover crucial causal relationships connected with
these difficulties. Finally, the assessment approach and data must be integrated before being
utilized to create an intervention that will alter causal relationships in order to reduce panic
attacks, isolation, and conflict while simultaneously increasing adaptive behaviors. The clinical
and scientific applications of psychological assessment are exemplified by the aforementioned
assessment goals—the systematic measurement of a person's behavior, factors associated with
variance in behavior, and inferences and judgments based on those measures (see multiple
definitions of psychological assessment in Geisinger, 2013; Haynes, Smith, &Hunsley, 2019).
We refer to overt behaviors, emotions, cognitive processes, and physiological responses as
"behavior." Behavioral, environmental, social, and biological variables are also included in the
phrase "variables."
Fundamental assumptions, applicability, utility, and preferred assessment procedures differ
among psychological assessment paradigms. The assumptions, ideas, values, hypotheses, and
procedures accepted within an area of psychological assessment are referred to as a
psychological assessment paradigm. All psychological assessment paradigms are at least
partially explanatory; in other words, they are designed to explain why people behave the way
they do. Some
psychodynamic frameworks, for example, may assume that the aforementioned client's terror,
social isolation, and conflict are mostly caused by historical, developmental, unconscious, and
intrapsychic processes. The client's verbal reports of perceptions when asked to see ambiguous
stimuli, such as a Rorschach or Thematic Apperception Test, are thought to be the best way to
identify these reasons in this paradigm. Some personality-based paradigms presume that a
client's issues are caused by temporally and situationally consistent patterns of cognitive,
emotional, and behavioral dispositions that can be recognized by a self-report symptom
inventory.
When it comes to psychological reports, there are a variety of models and approaches to
consider. Three models for psychological reports will be described in the following sections:
• The Test Oriented Model,
• The Domain Oriented Model, and
• The Hypothesis Oriented Model.
Results are reviewed on a test-by-test basis in the Test Oriented Model. Each test is listed by
name, along with the most important results for that test. In general, each test is covered in its
own paragraph. There is little or no effort (at least not in the "Results of Assessment" section) to
compare and contrast data between the various exams. The strength of this method is that it
clearly identifies the source of each piece of information. In some cases, such as forensic reports,
this could be critical. The reader's attention is drawn to the tests rather than the client's adaptive
functioning, which is a flaw in this approach.
It also conveys to the reader that psychological evaluation is a low-level, technical skill that
entails little more than administering a test and copying interpretive remarks from a manual. It
ignores the psychologist's position as a test data integrator; a professional who brings to bear his
understanding of how the test was developed, normed, the limitations of test data
generalizability, and how to use the data in a theoretical/conceptual manner to better understand
the client. In the past, the Test Oriented Model was widely utilized, but it has grown increasingly
unpopular in recent years.
The Domain Oriented Model
This model categorizes results into abilities or "functional domains." Intellectual capacity,
interpersonal skills, psychosocial pressures, coping mechanisms, intrapersonal requirements,
motivational variables, depression, psychotic traits, and other themes usually get their own
paragraph. When there isn't a precise referral question and you're not sure how your data will be
used, this approach comes in handy. A recently hospitalized patient, for example, may have
limited background information. You have no idea why he was admitted or what circumstances
led to his admittance. As a result, determining which parts of your data will be beneficial to the
treatment team might be difficult. In neuropsychological reports, where a number of providers
may potentially become engaged in the case, the Domain Oriented Model is also used. Each
provider will concentrate on a different section of the report in order to assist with a specific
aspect of the intervention. This strategy is also beneficial when using evaluation to track
therapy progress, since it enables you to keep track of changes in the client's functioning in a
variety of areas. A shortcoming of the Domain Oriented method is that the reader may be
presented with a great deal of information that has little relation to his intended intervention.
Hypothesis Testing Model
The Hypothesis Testing Model organizes results around possible answers to the referral
question(s). A hypothesis is presented in the "Purpose for Evaluation" section, and evidence is
then presented systematically to support or refute it. Separate paragraphs in the "Results of
Evaluation" section integrate data from the history, mental status assessment, and behavioral
observations with data from all the tests to address theoretical/conceptual difficulties. Tests are
rarely referred to by their full names. For instance, data from the MMPI-2 scale 2 might be
paired with interpretive data from the MCMI dysthymia scale. It is included in a paragraph about
depression if the integration of this information is consistent with the history and mental state
evaluation. The efficiency and concise focus on the referral problem are the model's strengths.
Unrelated details do not distract the reader. The model's main flaw is that information that is
not linked to the "objective of the evaluation," but could be useful to other disciplines, is not
reported.
A psychological report can be organized in a variety of ways. Some practitioners like to write
letters in an informal, unstructured fashion. When the report will be seen by a single referral
source and the referring person is known to the practitioner, this is very acceptable. Other reports
may be better organized around more structured headers (for example, 'Referral question,' 'Test
findings,' and 'Summary and recommendations'). Some reports may require (and practitioners
prefer to include) a detailed history, but others may want to spend more time focusing on
perceptions and interpretations. Given recent trends toward treatment planning and
demonstrating the practical, everyday relevance of assessment, some reports may devote more
time and effort to providing concrete, specific recommendations for psychotherapy, vocational
training, educational intervention, or neuropsychological rehabilitation.
Even if reports do not formally identify certain headings and subheadings, they usually contain a
consistent set of subject sections. The following is a list of common sections (adapted from
Groth-Marnat, 1999; Williams & Boll, 2000):
Name:
Age (birth date):
Sex:
Ethnicity:
Report publication date:
Examiner's name:
Referred by:
Question of Referral:
Evaluation Procedures:
Background information:
Observations on behavior:
Test results:
Impressions and interpretations:
Conclusions and suggestions:
A top-of-report notification that the report is 'Confidential' is an added feature. The author's
signature, name, and title should appear at the end of the report. This is critical because it
signifies the author's formal acceptance of responsibility for the report's contents. The
identifying information (name, age, sex, etc.) is rather straightforward, but the remaining
sections require more explanation. When clarifying the referral question, it helps to ask the
referral source what judgments they need to make about the client. In some cases, this will
entail discussing with the referral source the types of questions that can and cannot
realistically be answered through formal assessment.
Such conversations may even lead to a consensus that formal assessment is not necessary in this
circumstance. A clearly articulated referral question will carry through to the rest of the report in
that it will provide a frame of reference for this material as well as a rationale for what should be
included in the sections on background information (history), impressions / interpretation, and,
most importantly, the summary / recommendations.
Creating bulleted points in the summary, each of which provides a clear answer to each of the
referral questions, is one useful strategy. The points must, however, be compatible with the
content offered in the impressions / interpretation section. Making a brief, succinct, orienting
statement about the client (e.g., 'Mr. X is a 36-year-old, white, right-handed, married male with a
high school education who sustained a severe, diffuse closed head injury on April 12, 1998') is a
nice way to start the referral question section (and the report in general).
Common errors
a. Validity Mismatch
Some tests are beneficial in a variety of scenarios, but no single test is suitable for all jobs
with all people in all circumstances. Gordon Paul's seminal 1967 paper shifted our focus
away from the oversimplified search for effective therapies and toward a more
challenging but important question: "What treatment, by whom, is most beneficial for
this individual with that unique illness, and under what set of circumstances?"
"Has research established sufficient reliability and validity (as well as sensitivity,
specificity, and other relevant features) for this test, with an individual from this
population, for this task (i.e., the purpose of the assessment), in this set of
circumstances?" is a question that arises when selecting assessment instruments. It's
worth noting that when the population, task, or conditions change, so will the
measurements of validity, reliability, sensitivity, and so on.
To establish if tests are well-suited to the task, individual, and situation at hand, the
psychologist must first pose a fundamental question: Why am I conducting this
assessment in the first place?
b. Confirmation Bias
Information that is congruent with our attitudes, beliefs, and expectations is frequently
sought, recognized, and valued. If we establish an initial impression, we may favor
results that confirm it, while discounting, ignoring, or misinterpreting data that
contradicts it.
This premature cognitive commitment to an initial impression is analogous to the logical error
of hasty generalization; it can build a powerful cognitive set through which we sift all
subsequent data.
To guard against confirmation bias (the tendency to choose information that confirms our
assumptions), it's a good idea to actively seek out data that contradicts our expectations and to
experiment with different interpretations of the facts.
Standardization can be thwarted in other ways as well. People may arrive for an
assessment session without adequate reading glasses, or after taking cold medication that
makes them drowsy, or after experiencing a family emergency or loss that makes them
unable to concentrate, or after staying up all night with a loved one so that they can now
barely keep their eyes open. The professional conducting the assessment must recognize
these situational factors, understand how they can jeopardize the validity of the assessment,
and know how to resolve them successfully.
Any of us who do assessments can be affected by these same situational elements and
find ourselves unable to function effectively on any given day. We can also fall short due
to a lack of knowledge. It is critical to conduct only those examinations for which enough
education, training, and supervised experience have been provided. We may do
admirably in one field (e.g., counselling psychology, clinical psychology, sport
psychology, organizational psychology, school psychology, or forensic psychology) and
mistakenly believe that our skills will simply transfer to other areas. It is our job to
acknowledge our own boundaries of expertise and to ensure that any evaluation is based
on proper knowledge of the relevant areas of practice, issues, and instruments.
e) Ignoring the Consequences of Low Base Rates
Ignoring base rates can contribute to a variety of testing issues, but very low base rates
appear to be particularly problematic. Assume you've been tasked with creating an
assessment procedure that will detect corrupt judges so that judicial applicants can be
vetted. It's a challenging task, in part because just one judge out of 500 is (hypothetically)
dishonest. You compile all of the actuarial data you can find and discover that you can
create a crookedness screening test based on a variety of variables, personal history, and
test results. 90 percent of the time, your method is correct.
When your method is used to screen the next 5,000 judicial applicants, it's possible that
ten of them will be corrupt (because about 1 out of 500 is crooked). Nine of the ten
crooked candidates will be identified as crooked, while one will be identified as honest,
according to a 90 percent accurate screening process.
So far, everything has gone well. The issue is the 4,990 truthful candidates. Because the
screening is 10% wrong, and the only way for the screening to be wrong about honest
candidates is to label them as crooked, 10% of the honest candidates will be mistakenly
classified as crooked. As a result, 499 of the 4,990 honest candidates will be wrongly
classified as crooks using this screening procedure. So, out of 5,000 individuals
evaluated, the 90 percent accurate test identified 508 of them as crooked (i.e., 9 who
actually were crooked and 499 who were honest). Only 9 out of every 508 times the
screening procedure detects crookedness is correct. It has also mistakenly labelled 499
honest people as corrupt.
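The arithmetic of this example can be checked directly; the base rate (1 in 500) and the 90% accuracy figure are taken from the text above.

```python
# Worked base-rate arithmetic for the crooked-judge screening example.
# Figures (1-in-500 base rate, 90% accuracy, 5,000 applicants) are from
# the example in the text.
applicants = 5000
base_rate = 1 / 500
accuracy = 0.90

crooked = int(applicants * base_rate)       # 10 genuinely crooked
honest = applicants - crooked               # 4,990 honest

true_positives = crooked * accuracy         # 9 crooked correctly flagged
false_positives = honest * (1 - accuracy)   # 499 honest wrongly flagged

flagged = true_positives + false_positives  # 508 flagged in total
ppv = true_positives / flagged              # chance a flag is correct
print(f"flagged as crooked: {flagged:.0f}")
print(f"correct flags: {true_positives:.0f}  →  positive predictive value {ppv:.1%}")
```

Even with a "90 percent accurate" test, fewer than 2 in 100 of those flagged are actually crooked, because the honest group is 499 times larger than the crooked one.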
f) Dual High Base Rates Are Misunderstood
You are flown in as part of a disaster response team to work at a community mental
health facility in a city that has been ravaged by a catastrophic earthquake. Looking over
the center's records, you'll notice that out of the 200 people who have come for services
since the earthquake, 162 are of a particular religious faith and have been diagnosed with
PTSD related to the earthquake, and 18 are of that faith who have come for services
unrelated to the earthquake. Among individuals who are not of that faith, 18 have been
diagnosed with post-traumatic stress disorder (PTSD) as a result of the earthquake, while
two have arrived for services unrelated to the earthquake.
It seems self-evident that there is a substantial link between that religious faith and the
development of PTSD as a result of the earthquake: 81 percent of those who came for
services were of that religious faith and had PTSD. Perhaps those with this faith are more
prone to PTSD. Perhaps there's a more subtle link: this religion may make it simpler for
those with PTSD to seek mental health treatment.
However, inferring a link is a fallacy: religious beliefs and the onset of PTSD in this
society are both distinct causes. Ninety percent of all people seeking services at this
centre are of that religious faith (i.e., 90% of those who had developed PTSD and 90% of
those who had come for other reasons), and 90% of all people seeking services after the
earthquake have developed PTSD (i.e., 90% of those with that religious faith and 90% of
those who are not of that faith). Both factors have high base rates, so they appear to be
linked, yet they are statistically unrelated.
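The clinic's figures can be laid out as a 2 × 2 table to verify the independence claim: the rate of PTSD is the same (90%) whether or not a person is of that faith, so knowing one variable tells us nothing about the other.

```python
# The clinic's 2x2 table from the text, checked for independence.
# If faith and PTSD were associated, P(PTSD | faith) would differ
# from P(PTSD | not faith); here both equal the overall PTSD rate.
table = {
    ("faith", "ptsd"): 162, ("faith", "other"): 18,
    ("no_faith", "ptsd"): 18, ("no_faith", "other"): 2,
}
total = sum(table.values())                                          # 200
faith_total = table[("faith", "ptsd")] + table[("faith", "other")]   # 180
ptsd_total = table[("faith", "ptsd")] + table[("no_faith", "ptsd")]  # 180

p_ptsd_given_faith = table[("faith", "ptsd")] / faith_total              # 162/180
p_ptsd_given_no_faith = table[("no_faith", "ptsd")] / (total - faith_total)  # 18/20
p_ptsd_overall = ptsd_total / total                                      # 180/200

print(p_ptsd_given_faith, p_ptsd_given_no_faith, p_ptsd_overall)
```

All three probabilities come out to 0.9, which is exactly what statistical independence means: conditioning on religious faith does not change the probability of a PTSD diagnosis at all.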
g) The Fallacy of Perfect Conditions
We want to assume that "everything is great," that "conditions are perfect," especially
when we're rushed. If we don't check, we might not find out that the person we're
interviewing for a job, a custody hearing, a disability claim, a criminal case, asylum
status, or a competency hearing took standardized psychological tests and completed
other phases of formal assessment under conditions that skewed the results significantly.
For example, the person may have forgotten their reading glasses, be suffering from a
severe headache or illness, be using a hearing aid that isn't working properly, be taking
medication that impairs cognition or perception, have forgotten to take needed
psychotropic medication, have had a crisis that makes it difficult to concentrate, be in
physical pain, or have difficulty understanding the language used in the assessment.
h) Financial Bias
It is all too human to assume that we are immune to the effects of financial bias. A financial
conflict of interest, however, can have a subtle – and
sometimes not so subtle – impact on how we obtain, evaluate, and present even the most
regular facts. This notion is represented in well-established forensic texts and formal
standards that prohibit liens and other forms of contingent payment based on the outcome
of a case. "Forensic psychologists do not provide professional services to parties to a
legal proceeding on the basis of 'contingent fees,' when those services involve the
offering of expert testimony to a court or administrative body, or when they call upon the
psychologist to make affirmations or representations intended to be relied upon by third
parties," according to the Specialty Guidelines for Forensic Psychologists.
i) Ignoring the Effects of Recording and Third-Party Observers
According to empirical studies, people's responses (e.g., various elements of cognitive
function) during psychological and neuropsychological testing can be influenced by
audio-recording, video-recording, or the presence of third parties. Ignoring
these potential consequences can lead to a very erroneous conclusion. Reviewing relevant
research and professional norms is an important component of adequately preparing for
an evaluation that will entail recording or the participation of third-parties.
2.7 SUMMARY
The evaluation of insanity and the evaluation of competency are two potentially
challenging topics.
The question of whether the defendant is competent to stand trial is related to insanity.
In many ways, observing the child in a larger setting runs counter to the history of
individual assessment.
Furthermore, a psychologist may query if the treatment available at a psychological clinic
is appropriate for particular groups of self-referred clients.
So far, this discussion of the many circumstances in which psychological testing is
utilised has focused on when to test and on how tests can be most useful in making
decisions.
The referral source is the person or organisation that requests the psychological
evaluation and the referral question is the topic or issue that will be addressed during the
evaluation.
The distinction between open and closed questions is significant.
During assessment interviews, psychologists must be on the lookout for client issues.
A number of requirements must be met for a psychological test to be useful in research or
clinical practice, as you will see in the following sections.
But first, let's define what a psychological test entails.
In order to assist the client, the counsellor must possess the appropriate skills.
The client's subjective condition is also determined by the referral source.
When a counsellor respects the client's ethnic, linguistic, and cultural distinctions, the
client feels valued and understood, and trust and confidence in the counsellor grows.
The interpretation of test results is also highly important.
The whole-person approach refers to the technique of using a variety of tests and
processes to more thoroughly analyse persons.
A clearly articulated referral question will carry through to the rest of the report in that it
will provide a frame of reference for this material as well as a rationale for what should
be included in the sections on background information (history), impressions /
interpretation, and, most importantly, the summary / recommendations.
However, inferring a link is a fallacy: religious beliefs and the onset of PTSD in this
society are both distinct causes.
Both factors have high base rates, so they appear to be linked, yet they are statistically
unrelated.
A financial conflict of interest, on the other hand, can have a subtle – and sometimes not
so subtle – impact on how we obtain, evaluate, and present even the most regular facts.
According to empirical studies, people's answers (e.g., various elements of cognitive
function) during psychological and neuropsychological testing can be influenced by
audio-recording, video-recording, or the presence of third-party observers.
2.8 KEYWORDS
Standardisation – the process of making something conform to a standard.
Hypothesis testing – a form of statistical inference that uses data from a sample to draw
conclusions about a population.
Epistemology – the theory of knowledge, especially with regard to its methods, validity, and
scope, and the distinction between justified belief and opinion.
2.9 LEARNING ACTIVITY
1. Why is it important to have referral for Clinical Assessment
______________________________________________________________________________
________________________________________________________________________
2. Explain how internal and external factors can affect clinical assessment.
______________________________________________________________________________
________________________________________________________________________
2.10 UNIT END QUESTIONS
Long Questions:
1. Explain with examples the need for and the process of qualitative analysis in clinical
assessment.
2. Describe the major component of assessment.
3. Why is observation important for clinical data gathering? How is the process of
observation utilised in clinical data gathering?
4. How to analyse the qualitative data in clinical assessment?
5. Explain the importance of case study in clinical assessment.
Answers
1-a, 2-b, 3-c, 4-a, 5-b
2.11 REFERENCES
References book
Aiken (2009). Psychological Testing and Assessment. Pearson.
J F Ter Laak (2013). Understanding Psychological assessment: A Primer on Global
Assessment of the Client’s Behaviour in Educational and Organisational Setting. Sage
India
S K Mangal (1996). Abnormal Psychology. Sterling Publishers Pvt. Ltd.
Korchin (2004). Modern Clinical Psychology. CBS.
Textbook references
Robert Kaplan and Dennis P. Saccuzzo (2013). Psychological Assessment and Theory:
Creating and Using Psychological Test. Cengage
Anne Anastasi (1982). Psychological Testing. Macmillan Publishing Co. Inc.
Kaplan and Sadock. (8th Ed.), Synopsis of Psychiatry. B.I. Waverly Pvt. Ltd.
Janet R. Matthews and Barry S. Anton (2007), Introduction to Clinical Psychology.
Oxford University Press.
Website
https://psychology.fandom.com/wiki/Introduction_to_clinical_psychology
https://egyankosh.ac.in/bitstream/123456789/50987/3/Unit-3.pdf
https://opentext.wsu.edu/abnormal-psych/chapter/module-3-clinical-assessment-
diagnosis-and-treatment/
https://www.apa.org/topics/testing-assessment-measurement/understanding