
Research Instrument, Validation and Test of Reliability


Department of Education

Region III
DIVISION OF MABALACAT CITY

Name: __________________________________ Grade & Strand: _____________


School: _________________________________ Date: _______________________

LEARNING ACTIVITY SHEET


Applied Subject 12 PRACTICAL RESEARCH 2 (Q2-WK3)
Research Instrument, Validation and Test of
Reliability

I. Introduction

Hi, fellow researcher! Good job on your past output! You are doing well!
We will now proceed to the next stage of your research paper: the
instrument. This topic will help you understand what a research
instrument is, how to construct one, and how to evaluate whether or not
it is a good instrument. Enjoy learning!

II. Learning Competency

This Learning Activity Sheet was designed and written with you in mind.
It is here to help you construct your own research instrument.

III. Objectives

After going through this Learning Activity Sheet, you are expected to:

1. Draft your own research instrument; and
2. Test the reliability and validity of the instrument.

IV. Discussion

After a careful formulation of research questions/hypotheses and
sample selection, the next step in the research chain is developing the
data collection instrument.
The most commonly used research instrument in quantitative research
studies is the questionnaire.

A questionnaire is a research instrument consisting of a series of
questions for the purpose of gathering information from respondents.
Questionnaires can be thought of as a kind of written interview. They can
be carried out face to face, by telephone, computer, or post (McLeod, 2018).

A questionnaire is a research instrument consisting of a series of
questions and other prompts for the purpose of gathering information from
respondents. The questionnaire is widely used, especially in descriptive
survey studies (Borg & Gall, 1983).

Advantages of questionnaires
There are several advantages of using questionnaires in research as follows:

 Easy to conduct. Large amounts of information can be obtained from
many respondents. Questionnaires are also cost-effective when the
researchers aim to target a large population.
 Broad coverage. Local, national, and international respondents can
be easily reached by questionnaires. The Internet and particularly,
social media have made it easy to use questionnaires to reach out to
respondents afar.
 Responses received are frank and anonymous. Unlike interviews,
questionnaires are good for sensitive & ego-related questions.
 Carrying out research with questionnaires is less time consuming
and respondents can fill in questionnaires at a convenient time as
well.
 Questionnaires provide the researchers with quantitative data.
Quantitative information can be used to prove or disprove existing
hypotheses. The results of the questionnaires can also be easily
quantified by researchers either manually or through the use of
software packages such as SPSS.

Disadvantages of questionnaires
There are several disadvantages of using questionnaires in research as
follows:

 No clarification for ambiguous questions. Many experts argue that
questionnaires are inadequate to understand human behavior,
attitude, feelings, etc.
 Inadequate motivation to respond. Unattractive style and format of
questionnaires may also put some respondents off.

 Some questions may be poorly worded, while some others may be
very direct. These questions are not useful to obtain good
information. Many researchers also argue that questionnaires lack
validity as they yield information without explanation.
 Low response rate as questionnaires may not simply be suitable for
some respondents. Likewise, if the researchers decide to use a postal
questionnaire, many people may decline to respond.
 Many questions may be interpreted by respondents in ways the
researchers did not intend resulting in irrelevant information.
Likewise, it is also difficult for researcher to say how truthful the
respondents were.

Preliminary decisions in questionnaire design


(fao.org)

There are nine steps involved in the development of a questionnaire:

1. Decide the information required.
2. Define the target respondents.
3. Choose the method(s) of reaching your target respondents.
4. Decide on question content.
5. Develop the question wording.
6. Put questions into a meaningful order and format.
7. Check the length of the questionnaire.
8. Pre-test the questionnaire.
9. Develop the final survey form.

General Tips for Designing Questions


(Barkman, 2002)

• Ask demographics questions first - this gets the audience engaged in the
instrument.
• The first questions should be easy, avoiding controversial topics.
• Make sure questions are related to achievement of the targeted
outcome(s).
• Group like questions together – knowledge, attitude, skills, behavior, or
practice.
• Keep questions on one subject grouped together.
• Make your questions simple, but do not talk down to your audience.
• Make sure questions have only one thought. To make sure you are only
asking one question, do not include the word “and” in your questions.
(i.e., "How would you rate your financial management knowledge and
skills?" - The participants may want to rate their knowledge and skills
differently.)

• Avoid questions with the word “not” in them.
• Don’t quote a question directly from the written curriculum.
• Avoid trick questions.
• Make sure the questions are reasonable and do not invade the
respondent’s privacy.
• Avoid asking questions that are too precise – such as “how many times
did you eat out last month" - use a range instead.
• Avoid using technical jargon or acronyms.
• Remember the ethnic backgrounds of your respondents. Some words
have different meanings to different groups.
• Remember the literacy level of your group – you can check the reading
level of your instrument in MS Word.

Writing Survey Questionnaire Items


(Chiang, Jhangiani, & Price, 2015)

Types of Items

Questionnaire items can be either open-ended or closed-ended.
Open-ended items simply ask a question and allow participants to answer
in whatever way they choose. The following are examples of open-ended
questionnaire items.

 “What is the most important thing to teach children to prepare them
for life?”
 “Please describe a time when you were discriminated against
because of your age.”
 “Is there anything else you would like to tell us about?”

Closed-ended items ask a question and provide a set of response options
for participants to choose from, as in the following examples:

How old are you?

 _____ Under 18
 _____ 18 to 34
 _____ 35 to 49
 _____ 50 to 70
 _____ Over 70

On a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how
much pain are you in right now?

Have you ever in your adult life been depressed for a period of 2 weeks or
more?

Dichotomous question: this is a question that will generally be a “yes/no”
question but may also be an “agree/disagree” question. It is the quickest
and simplest question to analyze but is not a highly sensitive measure.

Multiple choice questions: these questions consist of three or more
mutually exclusive categories and ask for a single answer or several
answers. Multiple choice questions allow for easy analysis of results, but
may not give the respondent the answer they want.

Rank-order (or ordinal) scale questions: this type of question asks your
respondent to rank items or choose items in a particular order from a set.
For example, it might ask your respondents to order five things from least
to most important. These types of questions force discrimination among
alternatives but do not address the issue of why the respondent made
these discriminations.

Rating scale questions: these questions allow the respondent to assess a
particular issue based on a given dimension. You can provide a scale that
gives an equal number of positive and negative choices, for example,
ranging from “strongly agree” to “strongly disagree.” These questions are
very flexible, but also do not answer the question “why.”

What is a Likert Scale?


In reading about psychological research, you are likely to encounter
the term Likert scale. Although this term is sometimes used to refer to
almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much
more precise meaning.

In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a
new approach for measuring people’s attitudes (Likert, 1932). It involves
presenting people with several statements—including both favorable and
unfavorable statements—about some person, group, or idea. Respondents
then express their agreement or disagreement with each statement on a
5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree,
Strongly Disagree. Numbers are assigned to each response (with reverse
coding as necessary) and then summed across all items to produce a score
representing the attitude toward the person, group, or idea. The entire set
of items came to be called a Likert scale.

Thus, unless you are measuring people’s attitude toward something by
assessing their level of agreement with several statements about it, it is
best to avoid calling it a Likert scale. You are probably just using a “rating
scale.”
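The scoring procedure described above can be sketched in a few lines of Python. The statements, responses, and reverse-coded item positions below are hypothetical, made up purely for illustration:

```python
# Minimal sketch of 5-point Likert-scale scoring with reverse coding.
SCALE = {"Strongly Agree": 5, "Agree": 4, "Neither Agree nor Disagree": 3,
         "Disagree": 2, "Strongly Disagree": 1}

def score_likert(responses, reverse_items):
    """Sum 5-point Likert responses, reverse-coding unfavorable items.

    responses     -- list of response labels, one per statement
    reverse_items -- set of 0-based indices of unfavorably worded items
    """
    total = 0
    for i, answer in enumerate(responses):
        value = SCALE[answer]
        if i in reverse_items:
            value = 6 - value  # reverse code: 5 becomes 1, 4 becomes 2, ...
        total += value
    return total

# Four hypothetical statements; items 1 and 3 are worded unfavorably.
answers = ["Strongly Agree", "Disagree", "Agree", "Strongly Disagree"]
print(score_likert(answers, reverse_items={1, 3}))  # 5 + (6-2) + 4 + (6-1) = 18
```

The higher the total, the more favorable the attitude, because disagreement with an unfavorable statement counts the same as agreement with a favorable one.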

Reliability and Validity

To have a good instrument that can answer your statement of the
problem, reliability and validity are a must. Sometimes there are
questionnaires that can be adopted from another research study similar to
your topic. What you need to do is look for its Cronbach’s alpha, which
is a measure of internal consistency, that is, how closely related a set of
items are as a group. It is a measure of scale reliability
(stats.idre.ucla.edu).

(Note that a reliability coefficient of .70 or higher is considered
“acceptable” in most social science research situations.)
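As a rough sketch of how Cronbach’s alpha is computed from item scores (the ratings below are hypothetical, and the .70 cutoff is the rule of thumb noted above):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of k item-score columns,
    each holding one item's scores across the same n respondents."""
    k, n = len(items), len(items[0])

    def variance(xs):  # population variance; consistent use is what matters
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_vars = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Hypothetical 5-point ratings: 3 items answered by 5 respondents.
scores = [[4, 5, 3, 4, 5],
          [4, 4, 3, 5, 5],
          [3, 5, 4, 4, 5]]
alpha = cronbach_alpha(scores)
print(round(alpha, 3), alpha >= 0.70)  # 0.767 True
```

In practice, SPSS or Excel computes this for you; the sketch just shows that alpha compares the variance of the total scores with the summed variances of the individual items.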

For ethical considerations, a letter of intent to the author (owner of
the questionnaire) must be made.

Reliability and validity are important aspects of selecting a survey
instrument. Reliability refers to the extent to which the instrument yields
the same results over multiple trials. Validity refers to the extent to which
the instrument measures what it was designed to measure
(statisticssolutions.com).

Assessing Questionnaire Reliability and Validity


(Morrison, 2019)

Reliability

Reliability is the extent to which an instrument would give the same
results if the measurement were to be taken again under the same
conditions: its consistency.

How do we assess reliability?

One estimate of reliability is test-retest reliability. This involves
administering the survey with a group of respondents and repeating the
survey with the same group at a later point in time. We then compare
the responses at the two time points.

For categorical variables we can cross-tabulate and determine the
percentage of agreement between the test and retest results, or
calculate Cohen’s kappa.
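Cohen’s kappa can be computed directly from the paired test and retest categories; it corrects the raw percentage agreement for the agreement expected by chance. A minimal sketch with hypothetical yes/no responses:

```python
def cohens_kappa(test, retest):
    """Cohen's kappa: agreement between two categorical ratings,
    corrected for the agreement expected by chance."""
    n = len(test)
    p_o = sum(t == r for t, r in zip(test, retest)) / n  # observed agreement
    p_e = sum((test.count(c) / n) * (retest.count(c) / n)
              for c in set(test) | set(retest))          # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no answers from eight respondents at two time points.
test   = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
retest = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(test, retest), 2))  # 0.5
```

Here 6 of 8 answers agree (75%), but since half that agreement would be expected by chance, kappa reports a more modest 0.5.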

For continuous variables, or where individual questions are combined
to construct a score on a scale, we can compare the values at the two
time points with a correlation.
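For continuous scores, the comparison is simply a Pearson correlation between the two administrations. A minimal sketch, with hypothetical test and retest scale scores:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired test and retest scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scale scores from the same six respondents at two time points.
test_scores   = [12, 15, 20, 18, 10, 16]
retest_scores = [13, 14, 19, 18, 11, 15]
print(round(pearson_r(test_scores, retest_scores), 3))  # 0.977
```

A correlation near 1 indicates that respondents kept roughly the same rank order between the two administrations, which is what test-retest reliability asks.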

One immediately obvious drawback of test-retest reliability is memory
effects. The test and the retest are not happening under the same
conditions. If people respond to the survey questions the second time in
the same way they remember responding the first time, this will give an
artificially good impression of reliability. Increasing the time between
test and retest (to reduce the memory effects) introduces the prospect of
genuine changes over time.

If the survey is to be used to make judgements or observations of
another subject, for example clinicians assessing patients with pain or
mental health issues, or teachers rating different aspects of children’s
writing, we can compare different raters’ responses for the same
subject; inter-rater reliability. Here we would use the same statistics
as for test-retest reliability. As with test-retest reliability the two
measurements are again not taken under the same conditions, the
raters are different; one may be systematically “harsher” than the other.

Parallel-form reliability involves developing two equivalent,
parallel forms of the survey; form A and form B, say, both measuring the
same underlying construct, but with different questions in each.
Respondents are asked to complete both surveys; some taking form A
followed by form B, others taking form B first then form A. As the
questions differ in each survey, the questions within each are combined
to form separate scales. Based on the assumption that the parallel forms
are indeed interchangeable, the correlation of the scale scores across
the two forms is an estimate of their reliability. The disadvantage of this
is that it is expensive; potentially double the cost of developing one
survey.

An alternative is split-half reliability. Here we divide the survey
arbitrarily into two halves (odd and even question numbers, for example),
and calculate the correlation of the scores on the scales from the two
halves. Reliability is also a function of the number of questions in the
scale, and we have effectively halved the number of questions. So we
adjust the calculated correlation to estimate the reliability of a scale that
is twice the length, using the Spearman-Brown formula.
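The split-half procedure with the Spearman-Brown correction can be sketched as follows; the item scores are hypothetical, and the odd/even split follows the example just described:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(items):
    """items -- list of item-score columns; odd-position items form
    one half of the survey, even-position items the other."""
    n = len(items[0])
    odd  = [sum(items[i][r] for i in range(0, len(items), 2)) for r in range(n)]
    even = [sum(items[i][r] for i in range(1, len(items), 2)) for r in range(n)]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown: reliability of the full-length scale

# Hypothetical scores for 4 items from 5 respondents.
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [3, 5, 4, 4, 2],
         [4, 5, 3, 4, 3]]
print(round(split_half_reliability(items), 3))  # 0.927
```

The Spearman-Brown step matters because the raw half-to-half correlation understates the reliability of the full-length survey.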

Split-half reliability is an estimate of reliability known as internal
consistency; it measures the extent to which the questions in the survey
all measure the same underlying construct. Cronbach’s alpha is
another measure of internal consistency reliability. For surveys or
assessments with an even number of questions Cronbach’s alpha is the
equivalent of the average reliability across all possible combinations of
split-halves. Most analysis software will also routinely calculate, for
each question or questionnaire item in the scale, the value of Cronbach’s
alpha if that questionnaire item was deleted. These values can be
examined to judge whether the reliability of the scale can be improved
by removing any of the questionnaire items as demonstrated in the
example below.

The scale that is constructed from these 6 questionnaire items has
a Cronbach’s alpha of 0.866. The 4th questionnaire item (Q4) has the
weakest correlation with the other items, and removing this
questionnaire item from the scale would improve the reliability,
increasing Cronbach’s alpha to 0.893.
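An alpha-if-item-deleted check like the one just described can be sketched as follows. The scores below are hypothetical and do not reproduce the 0.866/0.893 figures above; they simply show a weak third item whose removal raises alpha:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns."""
    k, n = len(items), len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(c) for c in items) / variance(totals))

def alpha_if_deleted(items):
    """Alpha recomputed with each item left out in turn."""
    return [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]

# Hypothetical scores: the third item tracks the other two poorly.
items = [[4, 5, 3, 4, 5, 2],
         [4, 4, 3, 5, 5, 2],
         [3, 4, 4, 4, 3, 3]]
print(round(cronbach_alpha(items), 3))                 # 0.711
print([round(a, 3) for a in alpha_if_deleted(items)])  # [0.214, 0.214, 0.921]
```

Dropping the third item raises alpha from 0.711 to 0.921, whereas dropping either of the first two items would hurt the scale badly; this is exactly the judgement the alpha-if-deleted column supports.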

With the use of Microsoft Excel and online tools
(https://www.socscistatistics.com/tests/), we can compute the reliability
and validity of an instrument.

V. Activity

Activity # 1:
Create at least 20 survey questions on your chosen topic.

VI. Assessment

Pilot Testing
Pretesting

Once you have finished designing your survey questionnaire, find 5-10
people from your target group to pretest it. If you can’t get people from
your exact target group, then find people who are as close as possible.

Try to get a range of different people who are representative of your
target group. For example, if your target group is young people aged 15-25,
try to include some who are younger, some who are older, and boys and
girls from different socioeconomic backgrounds.

Ask them to complete the survey while thinking out loud.

Once you have found your testers, ask them to complete the survey
one at a time (they shouldn’t be able to watch each other complete it). The
testers should complete the survey the same way that it will be completed
in the actual project. So, if it’s an online survey they should complete it
online, if it’s a verbal survey you should have a trained interviewer ask
them the questions.

While they are completing the survey ask them to think out loud.
Each time they read and answer a question they should tell you exactly
what comes into their mind. Take notes on everything they say.

Observe how they complete the survey.

You should also observe them completing the survey. Look for places where
they hesitate or make mistakes. This is an indication that the survey
questions and layout are not clear enough and need to be improved. Keep
notes on what you observe.

Make improvements based on the results.

Once all the testers have completed the survey, review your notes from
each session. At this point it is normally clear what the major problems
are, so you can go about improving the survey to address those problems.
Normally this is all that is needed. However, if major changes are needed
to the questions or structure, it might be necessary to repeat the
pretesting exercise with different people before starting the survey.

Requirement: Submit the unedited questionnaire and the edited
questionnaire based on the pretesting.

VII. References

Barkman, S. (2002). A Field Guide to Designing Quantitative Instruments to Measure
Program Impact. Retrieved from
https://ag.purdue.edu/extension/pdehs/Documents/QuantitativeFieldGuide.pdf

Borg, W. R., & Gall, M. D. (1983). Educational research: An introduction (4th ed.).
New York: Longman Inc.

Chiang, I., Jhangiani, R., & Price, P. (2015, October 13). Constructing Survey
Questionnaires. Retrieved November 04, 2020, from
https://opentextbc.ca/researchmethods/chapter/constructing-survey-questionnaires/

HOME. (n.d.). Retrieved November 04, 2020, from
https://stats.idre.ucla.edu/spss/faq/what-does-cronbachs-alpha-mean/

Krosnick, J.A. & Berent, M.K. (1993). Comparisons of party identification and policy
preferences: The impact of survey question format. American Journal of Political
Science, 27(3), 941-964.

McLeod, S. (n.d.). Questionnaire: Definition, Examples, Design and Types. Retrieved
November 04, 2020, from https://www.simplypsychology.org/questionnaires.html

Morrison, J. (2019, September 20). Assessing Questionnaire Reliability. Retrieved
November 04, 2020, from
https://select-statistics.co.uk/blog/assessing-questionnaire-reliability/

Reliability and Validity. (2020, June 18). Retrieved November 04, 2020, from
https://www.statisticssolutions.com/reliability-and-validity/

VIII. Answer Key

Activity 1: Answers may vary.

Assessment: Answers may vary.

IX. Development Team

Development Team of the Learning Activity Sheets

Writer: Kevin Junior P. Gomez, MBA
Editor: Anthony Rayley M. Cabigting, DEM
Reviewer: Jeffrey R. Yumang
Illustrator:
Layout Artist:
Management Team: Engr. Edgard C. Domingo, PhD, CESO V
Leandro C. Canlas, PhD, CESE
Elizabeth O. Latorilla, PhD
Sonny N. De Guzman, EdD
Elizabeth C. Miguel, EdD

For inquiries or feedback, please write or call:


Department of Education – Division of Mabalacat

P. Burgos St., Poblacion, Mabalacat City, Pampanga

Telefax: (045) 331-8143

E-mail Address: mabalacatcity@deped.gov.ph

