PYC2606 Notes
Learning model
° Learning outcome: the purpose of learning achieved by producing specific outcome
products.
° Outcome product: the result of the learning activities a learner engages in during the
learning process. The learner produces outcome products during learning.
° Production method: how an outcome product is produced. A series of actions constitute an
activity, and a series of activities form a method.
° Learning opportunity: an opportunity to work towards achieving the required learning
outcomes by producing particular outcome products. A learning opportunity has three elements:
(a) an outcome product
(b) the method for producing the outcome product, and
(c) a reference to the resource required for producing the outcome product.
The outcome product is the most important element; each activity of a learning opportunity
is geared towards the production of the outcome product. An outcome product has to fulfil
certain standards: if a learner produces an outcome product that fulfils the minimum
criteria, the learner can be declared competent.
The purpose of the questionnaire refers to what it intends to measure and for whom it will
be used.
The first step is to identify the general topic of interest, select a problem area within that
topic that you want to investigate, and reduce the general problem to more specific questions.
Do they all relate to each other? Can you combine them into one question? Should you
rather choose one question and leave the others for separate questionnaires?
The content domain therefore consists of the tasks, behaviours, attitudes, etcetera related
to one or more of these questions.
Decide on what is relevant. By limiting the scope in this way, you can cover your topic
adequately without asking irrelevant questions, and you still have a questionnaire that is
relevant and not too long.
2. Design a questionnaire
Outcome product
A questionnaire specification document
Method
Activity 2.1: Decide on item format and scaling method
Activity 2.2: Decide on the total number of items
Activity 2.3: Design the layout for the questionnaire
Resource reference
Item format
Layout of the questionnaire
Specification document for a questionnaire
Item format
1 Closed questions
A closed question offers respondents a limited choice of alternative replies, whereas an
open question allows the respondents to answer in any way they want to.
yes/no type
true/false type
multiple choice type
Rating scales
2 Open questions
Phrase the question carefully if you want more than just a yes or no answer. Open
questions invariably elicit some irrelevant and repetitious information, and they require a
considerable degree of language proficiency and communication skills from respondents.
3 Rating scales
Rating scales are used to measure complex or non-factual topics such as opinions, beliefs,
attitudes and values. These are complex issues that have to do with states of mind and are
therefore more difficult to measure; they are usually multifaceted. Respondents typically
indicate the extent to which they agree or disagree with a statement. Ratings may be
influenced by a person’s mood on the day or by political events in the country at the time.
Guidelines can be followed when compiling a rating scale:
1 Define the dimension being rated. Each item or statement to be rated must refer to only
one thing or dimension. If you ask respondents to “rate friendliness and efficiency”, you are
confusing two different dimensions.
2 Decide on the number of ratings for the scale.
3 Decide whether to use an even or uneven number of ratings. An uneven number gives a
neutral category in the middle, but people may then tend to choose the neutral option (the
error of central tendency).
4 Define the different rating categories. They must be mutually exclusive - each rating
category should mean something different.
Attitude scales are rating scales that consist of a group of items designed to reflect
different attitudes toward the topic in question. Their main function is to classify people
with respect to a certain attitude.
Item | Type
1. Do you have a valid driver’s licence? (Yes/No) | Closed question - limited choice of answers.
2. Why do people need to have a valid driver’s licence? | Open question - respondents state their own opinions; allows for any kind of answer.
3. People should have a driver’s licence (choose one answer):
   for identification purposes / to prove that they can drive / in case they have an
   accident | Closed question because there is a limited choice of answers (multiple choice type).
4. Young people are good drivers. (True/False) | Closed question - limited choice of answers.
5. Good drivers are:
   alert - - - - - relaxed
   cautious - - - - - fast reactors
   older - - - - - younger | Rating scale, semantic differential type (extreme scale points are opposing adjectives).
6. Mark the characteristics of good drivers from the list. | Closed question because there is a limited choice of answers.
Action: Link item format and scaling method to the purpose and content of your
questionnaire - decide what kind of items to use in order to get the information you want.
Information required
• age - closed (check one: under 18 years / 18 - 22 years / 23 - 35 years / 36 - 50 years)
• gender - closed (check male or female)
• socio-economic status
• personal experience of crime - a closed question with a yes/no answer; for how much
or how often they personally experienced crime, use a multiple choice item or a rating
scale; for a general description, use an open ended question.
• levels of stress associated with different crimes - rating scale
• personal reactions to different crimes - simple open ended question or you might try
a rating scale like a semantic differential.
A specification document sets out what a questionnaire should contain. It is really just a
list of the required characteristics for your questionnaire in terms of type of items, number
of items, layout and so on, in order for the questionnaire to do what it is supposed to do.
Before compiling a questionnaire, have a rough idea of the line of enquiry you wish to
follow, the kind of questions you will ask, the level of language you use, how complex the
questions are and so on. In this way, the purpose of the investigation, the kind of
information you want and the characteristics of the respondents influence the questionnaire
specifications. The detailed specification of measurement aims should be clearly related to
the purpose of the research.
Action: You need to identify the coverage required for each content area. You need at least
one item on each of these content areas. In some cases, one item is not enough. For
example, if you want information on stress levels associated with different crimes, you
might want to use a rating scale. Rating scales do not have a fixed number of items but for
the purposes of this assignment, your rating scale should consist of at least
twelve items. It is also useful to have more than one item dealing with the same aspect to
serve as a control so that you can see whether the respondent is answering questions
consistently or not. For example, in addition to your rating scale, you might also have an
open ended question that deals with the same content area.
Action: You should evaluate the impact of characteristics of respondents and the time
available for completing the questionnaire.
You could cover the content domain comprehensively with 21 items (some of which may be
grouped into a rating scale containing approximately twelve items). We could break down
the coverage of the content areas as follows: the first three items would be closed
questions to collect biographical information, then a filter question (closed, yes/no type)
followed by an open question on personal experience of crime, a rating scale (consisting of
twelve items) on levels of stress associated with different crimes, a closed (multiple choice)
question on personal reactions to crimes and an open question to serve as a control, an
open question on perceptions of the effect of crime and lastly an open question for any
other comments the respondent may wish to add. You would therefore have five closed
items, four open items and a twelve item rating scale (a total of 21 items). The
questionnaire should not be too long or complicated.
1. Try to avoid putting ideas into the respondents’ minds or suggesting preferable
attitudes. Start with open questions and then introduce more structured questions.
2. Start with a broad question that orients the respondent to the topic, followed by the
twelve item rating scale (moving from the general to the more specific) - the funnel
approach.
3. It is better to put personal data questions near the end, preceded by a short
explanation such as “To help us classify your answers ...”. Items on biographical
information can go at the beginning if there are only a few, but if there are a lot of
items they are better at the end.
4. You probably have groups of questions relating to particular aspects of your main topic.
Decide on the order in which to present these groups of questions. Two main
considerations: the logic of the survey and the likely reactions of the respondents. Start
off with ‘awareness’ questions relating to the topic in general followed by ‘factual’
questions dealing with the respondents’ own actions or behaviour. Then you might
include questions on likes and dislikes, preferences and attitudes.
5. Sensitive or very personal issues should come toward the end of the questionnaire to
avoid embarrassing or offending the respondents. A closed question and an open
question serve as a sort of validity check for this content area.
6. Place one or more open ended questions at the end to allow the respondents to express
opinions or feelings that are related to the topic but have not been covered by the
questions. Respondents are then more likely to feel satisfied that answering the
questions was worth the effort.
6 Filter questions
Start with a filter or screening question that excludes some respondents from answering
irrelevant questions. If the answer is no, skip the next few questions.
Introduction
All research is aimed at finding answers to questions. Questions may arise from anomalies
or gaps that a researcher has found in existing theories, from a need to solve a practical
problem, or just from personal curiosity and intuition. Good items are critical to the success
of a research project.
They produce reliable data and accurate information upon which valid conclusions can be
based.
1 The items should be based on a meaningful definition or description of what you want to
measure.
2 Constructing items is a science - it requires an in-depth knowledge of one’s topic and
familiarity with the principles governing good item design - and an art - it requires
creativity in selecting or constructing items appropriate to the particular context.
3 The items should be aimed at obtaining meaningful information with a minimum of
distortion.
4 Careful thought must be given to the relevance, language level, cultural interpretations,
and clarity of the items. It is important that the questionnaire is reader-friendly. Avoid
items that are humiliating, confusing, or make respondents feel inadequate.
It is recommended that researchers use well-known questionnaires, of which the reliability
and validity have already been established. Even then, you must critically scrutinise each
item.
There may be no existing questionnaire that taps the particular construct you intend to
investigate or you may have to eliminate a number of unsuitable items.
3.4 Clarity
If anything in your questionnaire is not understood or is misinterpreted, your results will
be useless.
● Do avoid ambiguity - statements that can be interpreted in a number of ways, such as
“Visiting lecturers can help one feel less isolated.”
(Does this mean that the lecturers do the visiting - or do the students?)
● Don’t ask questions with two inherent issues, such as
“I am fully occupied and I don’t feel lonely.”
Rather break such questions or statements into two separate items.
● Do scrutinise any items that contain the conjunctions ‘and’ or ‘or’ to see if they contain
more than one possible issue.
● Wherever possible, don’t use negatives.
● Do use active rather than passive statements. Passive statements are more difficult
to understand, and therefore more difficult to respond to.
It is believed by students that they will be given an extension by lecturers.
The following is simpler:
Students believe lecturers will give them an extension.
● Do ask specific questions rather than general or vague questions. General items may
not be interpreted in the same way by everyone, and thus produce unreliable answers.
● Do write items that are specific, simple, clear, and to the point.
Action: Evaluate existing items according to these criteria. Shortcomings of the following
questions:
1 What is your income?
vague
2 Don’t you disagree with yesterday’s Parliamentary decision regarding smoking and
drinking? (Yes/No)
leading question which contains two inherent issues and a double negative.
3 We should be less passive about what is happening in the environment.
(agree/uncertain/disagree)
vague. “Who is ‘we’?”, “Less passive than what?”, and “What environment?”
4 I feel depressed and sad. (never/sometimes/often/all the time)
two inherent issues. ‘Depression’ and ‘often’ may mean different things to different
people.
5 How often do you take drugs? (never/sometimes/often/all the time)
imprecise and may be interpreted in various ways.
6 Abortion should not be legalised. (agree/disagree)
too global
7 Most men are more emotionally stable than most women are. (agree/disagree)
a leading question
8 Suppose you are measuring Unisa students’ level of motivation and one of your items is
“How many hours do you spend studying each week?”
does not necessarily relate to motivation
Introduction
Improve your questionnaire further by actually trying it out and seeing how people respond
to each item. In particular, you will use simple item analysis techniques to improve the 12-
item rating scale that forms part of your questionnaire.
Action: Administer the questionnaire to the sample. Be sure to be ethical about what you
are doing. The answer to each of the questions should be YES.
Keep notes of what kinds of questions people ask and what difficulties arise, so that you
can make improvements. When you get each questionnaire back, quickly scan it to see
that the respondent has completed all of it.
Action: There are two ways of using a pilot study to improve your questionnaire. Use what
happened during the study. Look again at the notes you made while you were administering
the questionnaire. Now write a short summary of the changes.
Item analysis
Item analysis comprises procedures to select the best items for inclusion. Commonly used
criteria are item difficulty (item facility or item variance) and item discrimination.
1 Item difficulty/variance
The ideal questionnaire is one where about half the people get each of the items right.
Item analysis involves discarding items that are too easy or too difficult. The difficulty
index for an item is usually calculated by dividing the number of people who gave a correct
response by the total number of people in the sample. The difficulty index should be
between 0.25 and 0.75 and the average difficulty should be about 0.5.
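A minimal sketch of the difficulty-index calculation in Python; the response matrix is hypothetical (1 = correct, 0 = incorrect), and the 0.25-0.75 band is the criterion stated above.

```python
# Sketch: item difficulty = proportion of the sample answering correctly.
# The 'responses' matrix is hypothetical: one row per respondent,
# one column per item; 1 = correct response, 0 = incorrect.

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
]

n_people = len(responses)
n_items = len(responses[0])

for item in range(n_items):
    correct = sum(row[item] for row in responses)
    difficulty = correct / n_people           # difficulty (facility) index
    keep = 0.25 <= difficulty <= 0.75         # band suggested in the notes
    print(f"Item {item + 1}: difficulty = {difficulty:.2f}, keep = {keep}")
```

In this hypothetical sample, item 4 is answered correctly by everyone (difficulty 1.0), so it would be discarded as too easy.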
2 Item discrimination
Item discrimination is the ability of an item to discriminate between respondents according
to whatever the measuring instrument as a whole is measuring. Items should only be
selected if they measure the same characteristic - otherwise the scale loses focus. The
higher the correlation
coefficient, the more discriminating the item. A minimum correlation of 0.2 is generally
required. Items with negative or zero correlations are almost always excluded. A negative
correlation could be indicative that an item should have been reverse scored.
Statement | 1 Never | 2 Almost never | 3 Sometimes | 4 Most of the time | 5 Always
I like loud music. | | | | | √
I prefer quiet places. | √ | | | |
I enjoy noisy environments. | | | | √ |
The ticked options are known as item responses. Item 2 in our example should be
‘reverse-scored’, because if somebody says s/he never likes quiet places s/he is, in effect,
saying that s/he always likes noisy places and s/he should therefore get a high score.
Statement | 1 Never | 2 Almost never | 3 Sometimes | 4 Most of the time | 5 Always | Score
I like loud music. | | | | | √ | 5
I prefer quiet places. | √ | | | | | 5 (REVERSE SCORED)
I enjoy noisy environments. | | | | √ | | 4
The data sheet is divided into rows and columns - one row for each person in your sample
and one column for each item in your rating scale and total score. Take a questionnaire
and transfer the score for each item to the first row on the data sheet.
Calculate each respondent’s total score; the lowest score anybody can have on the rating
scale is 12 and the highest is 60.
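The reverse scoring and total-score calculation above can be sketched as follows. On a 1-5 scale a reversed score is 6 minus the raw score; the item scores and the set of reversed items below are hypothetical.

```python
# Sketch: reverse-score flagged items, then total a respondent's row.

REVERSED = {1}  # zero-based positions of items to reverse-score (assumed)

def reverse(score):
    # On a 1-5 scale: 1 <-> 5, 2 <-> 4, 3 stays 3.
    return 6 - score

def total_score(row):
    return sum(reverse(s) if i in REVERSED else s
               for i, s in enumerate(row))

row = [5, 1, 4] + [3] * 9        # twelve item scores for one respondent
print(total_score(row))          # prints 41: the raw 1 on item 2 counts as 5
```

With twelve items scored 1 to 5, the total necessarily falls between 12 and 60, as the notes state.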
Action: Start with the actual item analysis of your rating scale. Find items with too little
variance where almost everybody in the sample gets the same item score. You want your
scale to show differences between people.
Compare items and decide which are better items in terms of the amount of variance they
show.
Run your eye down each of the columns on your data sheet and look for items that may not
have sufficient variance. If a column contains mainly only one number the item doesn’t
show much variance. If a column contains a good spread of numbers the item shows lots of
variance.
It is not always possible to explain why most people end up answering an item in the same
way - the item may have been too extremely worded or too vague, or there may be a
strong ‘socially desirable’ way of responding.
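Eyeballing the columns can be backed up with a quick numeric check. Here is a sketch assuming a small hypothetical data sheet; the cut-off of 0.5 is arbitrary, since the notes give no numeric threshold for 'too little variance'.

```python
# Sketch: compute the variance of each item column on the data sheet.
from statistics import pvariance

data = [          # one row per person, one column per item (hypothetical)
    [5, 1, 3],
    [5, 2, 4],
    [5, 1, 2],
    [5, 2, 5],
]

for item in range(len(data[0])):
    column = [row[item] for row in data]
    var = pvariance(column)            # population variance of the column
    flag = "  <- little variance, consider discarding" if var < 0.5 else ""
    print(f"Item {item + 1}: variance = {var:.2f}{flag}")
```

Item 1 here, where everybody scores 5, has zero variance and tells you nothing about differences between people.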
Correlation coefficient
The statistical relationship between two constructs is called a correlation. A correlation
coefficient close to 0 indicates a weak relationship, while 0 represents no correlation. The
numerical size of a correlation coefficient indicates the strength of the relationship while
the sign (positive/negative) indicates the direction of the relationship.
A scatterplot gives a graphic display of the correlation coefficient. If there is a perfect
positive relation between two constructs (a correlation coefficient of +1) the dots form a
perfectly straight line with an upward slope. For a correlation coefficient of -1 the scores
form a perfectly straight line with a downward slope. No relation (a correlation coefficient
of 0) between two constructs results in an undefined shape.
If the item correlates strongly with the total score, we know that it measures more or less
the same thing as the other items.
Action: To measure differences between people our items need to show some variance, but
even if the items show lots of variance, the scale may not measure anything in particular.
Ensure that each item in the scale measures more or less the same thing and that the
items are not too divergent. You want an item to discriminate between high and low scorers
because it shows that the item measures more or less the same thing as the other items in
the scale.
Item 3 does seem to be pretty good at discriminating between high and low scorers.
Looking at whether item scores correspond with the total score is called item-total
correlation. Professional questionnaire constructors usually calculate a correlation
coefficient (an index of how strongly two variables are related) to establish how strong
each item-total correlation is.
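A hand-rolled Pearson coefficient is enough to compute an item-total correlation; the scores below are hypothetical, and the 0.2 minimum is the criterion given earlier under item discrimination.

```python
# Sketch: Pearson correlation between one item's scores and the total scores.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

item_scores = [1, 2, 3, 4, 5]       # one person per position (hypothetical)
total_scores = [14, 22, 30, 38, 46]

r = pearson(item_scores, total_scores)
print(round(r, 2))    # prints 1.0 - a perfectly linear relation
print(r >= 0.2)       # True: the item meets the minimum discrimination criterion
```

An item with a negative coefficient discriminates the wrong way round and is a candidate for reverse scoring or removal, as the notes describe.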
[Scatterplot: item score plotted against total score]
Each dot on the scatterplot represents a person. Dots are arranged roughly in a diagonal
line from bottom left to top right. This means that there is a strong correlation between the
item score and the total score - the item discriminates well. If the dots don’t seem to form a
pattern at all then there is no correlation, if the line seems to go from top left to bottom
right then there is a negative correlation (the item does discriminate, but the wrong way
round, so it is no good). You will have to draw 12 scatter plots (one for each of the 12
items).
Identify what appear to be the worst items in your scale in terms of failure to discriminate.
The reason why items don’t discriminate is usually that they measure something different
from the other items in the scale. Sometimes the wording of the item is to blame, but
sometimes it seems inexplicable and one just has to accept that it is so.
Action: Compile a final version of your questionnaire. Your scale should have
8 items, so discard 4 items. Study the list for items that don’t show much variance and the
other list for items that don’t discriminate well.
Your 8 item scale is more coherent and has a greater degree of reliability.
Introduction
The results should be reliable; that is, the questionnaire should measure consistently. You
will evaluate the reliability of the final version of the rating scale included in your
questionnaire. The interpretations based on the results should also be valid; that is, the
questionnaire should measure what it claims to measure.
Reliability
Various conditions might affect the results of the questionnaire e.g. the occasion on which
the questionnaire is administered or the sample of items in the questionnaire. Their effect
on the results is unpredictable and inconsistent. These irrelevant conditions are called
unsystematic sources of variation. The reliability refers to the consistency of results
over different administrations involving different occasions, test forms, etc.
A statistical index of reliability is the reliability coefficient, which ranges between 0 and 1.
Unreliable questionnaire - reliability coefficient close to 0.
Reliable questionnaire - reliability coefficient of 1.
The closer the value of the reliability coefficient is to 1, the more reliable the test.
To determine how consistent the results of a questionnaire are over different occasions,
administer the same questionnaire to the same group on two consecutive occasions. The
scores are correlated and the correlation coefficient represents the degree of test-retest
reliability. The closer the correlation coefficient is to 1, the more consistent the results.
Test-retest reliability thus indicates stability or consistency of scores over time.
A perfect correlation does not indicate that the second scores were identical; a person's
relative position to that of the others in the group stays the same. The time interval should
be at least several days to reduce the possibility of effects such as familiarity with the type
of items or respondents remembering their answers.
3 Evaluating reliability
The nature and purpose of the questionnaire determine which type of reliability is
appropriate. For a psychological test such as an intelligence test, the reliability coefficient
should be above 0.90. A reliability coefficient of 0.70 can be useful if the results are used in
combination with other information about the individual or group.
Action: You should be able to distinguish between different types of reliability. The purpose
of the questionnaire determines which type of reliability is appropriate.
The internal consistency of a rating scale is the extent to which the items measure the
same thing. You will obtain an estimate of the split-half reliability of this rating scale:
the degree of equivalence between two halves of the rating scale. A limitation of this
method is that the reliability coefficient one obtains depends to some extent on the items
included in each of the two halves.
Action: Look at these eight items and divide the rating scale into two halves by grouping
the odd items and even items together. Now re-number your items from 1 to 8. For each
person calculate the total score for the odd items and for the even items.
Person | 1 | 3 | 5 | 7 | Total Score | 2 | 4 | 6 | 8 | Total Score
1 | | | | | | | | | |
2 | | | | | | | | | |
3 | | | | | | | | | |
4 | | | | | | | | | |
The relation between these two sets of scores will give you an estimate of the reliability of
either of the two halves of the rating scale.
For each person, take the total score on the odd items and the total score on the even
items and where the two meet you make a dot on the graph. Draw a straight line
resembling the shape of the scatterplot.
If your scatter plot has a very undefined shape, the correlation coefficient is close to 0
indicating a weak relation between the two halves. If the line has an upward slope, the
correlation coefficient falls between 0 and +1. If most dots are close to the line, the
correlation coefficient is close to +1 and there is a fairly strong relation between the two
halves.
Action: In this context the correlation coefficient is a reliability coefficient. Values closer to
1 indicate a more reliable rating scale.
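The odd/even split and the correlation between the two halves can be sketched as follows; the respondent data are hypothetical, and the Pearson formula is the same one used for item-total correlation.

```python
# Sketch: split-half reliability - correlate odd-item totals with
# even-item totals across respondents.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

data = [                          # one row per person: eight item scores
    [4, 5, 4, 4, 5, 4, 5, 4],     # (hypothetical, after reverse scoring)
    [2, 1, 2, 2, 1, 2, 2, 1],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4, 5, 4, 5],
]

odd_totals  = [sum(row[0::2]) for row in data]   # items 1, 3, 5, 7
even_totals = [sum(row[1::2]) for row in data]   # items 2, 4, 6, 8

r = pearson(odd_totals, even_totals)
print(round(r, 2))    # prints 0.99 - closer to 1 means a more reliable scale
```

Here the two halves rise and fall together across respondents, so the coefficient is close to +1, matching the scatterplot description above.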
Validity
Validity is the extent to which a questionnaire measures what it claims to measure - the
extent to which the scores can be used for the intended purpose. There are three
categories of validity evidence: content validity, criterion-related validity and construct
validity.
1 Content validity
Content validity is determined by the degree to which the items in the questionnaire are
representative of the universe of tasks, behaviours or attitudes (the content domain) that
it was designed to measure. Content validity can be ensured by proper design, but it
cannot be expressed in terms of a quantitative index.
Face validity refers to the degree to which items appear to be relevant. It is based on the
subjective evaluation of people who are not necessarily experts either in the particular
area or in psychometrics. If the respondents do not regard the items as relevant (the
questionnaire does not have sufficient face validity), they might be less motivated and
even unwilling to cooperate.
2 Criterion-related validity
The criterion-related validity of a questionnaire is the extent to which the scores on the
questionnaire are effective in estimating an individual's position or performance on the
relevant criterion. Approaches to gathering evidence of criterion-related validity are
concurrent validity and predictive validity.
With concurrent validity, measures on the criterion are obtained at approximately the
same time as the scores on the questionnaire. The extent to which the scores accurately
estimate an individual's present position on the relevant criterion is then determined.
Concurrent validity is determined if you want to use your questionnaire to identify some
current behaviour or status of individuals.
For example, suppose you want to classify psychiatric patients according to their
disturbances. Take a representative group of psychiatric patients and administer the
questionnaire to them. At the same time you would ask psychiatrists or clinical
psychologists to classify these patients according to type of disturbance.
To evaluate predictive validity, the measures on the criterion are obtained in the future.
It is then determined to what extent the scores accurately predict an individual's scores on
the relevant criterion. Predictive validity is determined if you want to use your
questionnaire to predict some future performance of individuals. For example, to select
candidates for entrance into this course, take a representative group of students applying
for the course and administer your questionnaire to them.
At the end of the course you could obtain the students' examination marks. You will then
determine how effective scores on your questionnaire are in predicting the students’
examination marks.
To determine criterion-related validity, calculate the correlation between the questionnaire
results and the measures on the criterion; the resulting correlation coefficient is known as
the validity coefficient.
3 Construct validity
A construct is an unobservable quality which forms part of a theory designed to explain
observable behaviour. For example, anxiety is not observable, but it forms part of a theory
that explains observable behaviours.
You have to define your construct in terms of observable behaviours. The construct
validity of a questionnaire is the extent to which it indeed measures the theoretical
construct it aims to measure. Construct validity cannot be expressed in terms of a single
validity coefficient. You would expect groups who are supposed to differ in terms of a
construct to also obtain significantly different scores on a questionnaire measuring this
construct.
Another way to determine construct validity is to look at the correlation coefficients
between different questionnaires.
Convergent validity - if two questionnaires measure the same construct, you would expect
the scores to be significantly correlated.
Discriminant validity - if two constructs are theoretically unrelated, you would not expect a
high correlation.
Action: Consider the content domain of your questionnaire and the questionnaire
specification document and evaluate the content validity of your questionnaire.
6 Compile a manual
Outcome product
A manual for the questionnaire consisting of a description of the aim and design, an
evaluation of the properties, and procedures for administration, scoring and interpretation.
Method
Activity 6.1: Discuss the process of developing the questionnaire
Activity 6.2: Compile a manual
Resource reference
Manual: The purpose and structure of a manual
1 Purpose of a manual
Someone else might be interested in using your test or questionnaire. Report the process
of analysing and selecting the items as well as the reliability and validity of the
questionnaire. Give instructions for the administration and scoring of the questionnaire,
and some guidelines on how to interpret the results.
2 Structure of a manual
Define the target population. It should also be mentioned when and under which
circumstances the questionnaire was administered to them. Describe each technique used
for item analysis and you should indicate which criteria were used to justify the inclusion
or exclusion of items in the item selection process. Important for the user to know how
reliable or consistent the questionnaire is. Give a brief description of the method used to
determine reliability and justify why this was used. The estimated reliability coefficient is
then evaluated in terms of an acceptable level of reliability. Identify the category of validity
(be it content validity, criterion-related validity or construct validity) that is relevant for
your questionnaire.
7 Evaluate a questionnaire
Outcome product
An evaluated questionnaire
Method
Activity 7.1: Explore a questionnaire rating scale
Activity 7.2: Use the questionnaire rating scale to evaluate a questionnaire
Activity 7.3: Compare your evaluations to the QWAN
Resource reference
Content domain: Identify the content domain for a questionnaire
Item format
Layout of the questionnaire
Suitability of a questionnaire as measuring instrument
Writing questionnaire items
1. Questionnaire instructions
2. Item characteristics
Rate 0: all items are unanswerable (i.e. the choice of provided responses does not fit
the item, or the respondent does not have the required information)
Rate 1: most items are unanswerable
Rate 2: some items are unanswerable
Rate 3: no items are unanswerable
3. Questionnaire characteristics
Questionnaires should have sufficient items to cover the topic but they should not be too
lengthy. Items should be presented in a particular order to:
(a) counter response style and item bias
(b) increase the efficiency of the questionnaire, by grouping questions, by using filter
questions, and by having a good balance of different question types.
(c) be sensitive towards respondents. Put respondents at ease by incorporating neutral
and interesting questions at the beginning, and by placing forbidding, sensitive and
personal questions towards the end. Allow respondents to raise opinions and vent feelings
by providing open ended questions at the end.
4. Questionnaire functionality
The issue to be evaluated is whether the questionnaire is structured in such a manner that
it can function maximally in the light of its declared purpose. Does the structure of the
questionnaire (kinds of items and the sequence) support the questionnaire’s functionality
(i.e. what the questionnaire could be used for, what it is capable of) given its declared
purpose (i.e. the kind of information it is expected to deliver)? In other words, the
structure and functionality of a questionnaire are a function of its declared purpose.
8 Evaluate a manual
Outcome product
An evaluated manual
Method
Activity 8.1: Explore a manual’s rating scale
Activity 8.2: Use the manual’s rating scale to evaluate a manual
Activity 8.3: Compare your evaluations to the QWAN
Resource reference
Manual: The purpose and structure of a manual
Proper communication requires a logical presentation and clear and correct language. The
manual should start with a description of
(a) the nature of the questionnaire, then discuss
(b) the functionality of the questionnaire, and conclude with
(c) instructions for using the questionnaire.
The logical sequence would be (a), (b), (c).
A manual should cover ten different content topics, kept together in their distinct groups -
namely the three purpose areas.
Effective communication requires clear, precise and correct language, written in short,
direct sentences to enable unambiguous and precise communication. Technical information
should be simple and straightforward.
2.5 How items were analysed and selected for the questionnaire
Rate 0: not sufficient information about the analysis and selection of questionnaire items
Rate 1: describes one of the following:
(1) the technique used for item analysis
(2) the criteria used for including items in or excluding items from the questionnaire
Rate 2: describes both (1) and (2).
2.10 Guidelines for interpreting the information obtained via the questionnaire
Rate 0: does not provide instructions
Rate 1: does one of the following:
(1) provides instructions for interpretation of the information
(2) explains how the interpretation fits the aim
Rate 2: does both (1) and (2).
The MPQ measures preferred behaviour style in terms of drive (the ability to get things
done personally), interaction (the ability to work with people), management (the ability to
keep systems going) and regulation (the ability to adhere to rules and regulations). Each
factor is measured on a ten point scale.
To counter response bias, the sequence in which the action descriptions are provided is
varied randomly. Nine hundred university students were used in the development of the
questionnaire.
The original MPQ consisted of 145 items. Item analysis showed that 63 were really good
and that the remaining 82 items did not meet the criteria to be included. Of the 63, a
further thirteen were excluded, so 50 items were retained. The validity coefficient of 0,91
is high.