
Conceptualization, Operationalization and Measurement

Conceptualization is the mental process whereby blurred and imprecise notions (concepts) are made more specific and precise. It involves specifying the meaning of the concepts and variables to be studied.

Operationalisation is the process of converting a concept into a measure. This measure can be a variable, constant or scale depending upon the situation. Operationalization is one step ahead of conceptualization. It is the process of developing operational definitions. Operational definitions refer to the concrete and specific definition of a concept in terms of the operations by which observations are to be categorized. The operational definition of "earning an A for this course" might be "correctly answering at least 90% of the mid-term and final exam questions."

We will consider several examples, some good, and some bad.

Abraham Kaplan (1964) distinguishes three classes of things that scientists measure. The first class is direct observables: those things we can observe rather simply and directly, like the color of an apple or a check mark on a questionnaire. The second class, indirect observables, requires "relatively more subtle, complex, or indirect observations" (1964: 55).
When we note a person's check mark beside "female" in a questionnaire, we've indirectly
observed the person’s gender. Finally, the third class of observables consists of constructs -
theoretical creations that are based on observations but that cannot be observed directly or
indirectly. A good example is intelligence quotient, or IQ. It is constructed mathematically
from observations of the answers given to a large number of questions on an IQ test. No
one can directly or indirectly observe IQ. It is no more a "real" characteristic of people than
is compassion or prejudice.

Kaplan (1964: 49) defines a concept as a "family of conceptions." A concept is, as Kaplan
notes, a construct, something we create. Concepts like compassion and prejudice are
constructs created from your conception of them, my conception of them, and the
conceptions of all those who have ever used these terms. They cannot be observed directly
or indirectly, because they don't exist. We made them up.

To summarize, concepts are constructs derived by mutual agreement from mental images
(conceptions). Our conceptions summarize collections of seemingly related observations
and experiences. The observations and experiences are real, at least subjectively, but
conceptions and the concepts derived from them are only mental creations. The terms
associated with concepts are merely devices created for the purposes of filing and
communication. A term like prejudice is, objectively speaking, only a collection of letters.
It has no intrinsic reality beyond that. It has only the meaning we agree to give it.

Constructs aren't real in the way that trees are real, but they do have another important
virtue: They are useful. That is, they help us organize, communicate about, and
understand things that are real. They help us make predictions about real things. Some of
those predictions even turn out to be true. Constructs can work this way because, while
not real or observable in themselves, they have a definite relationship to things that are
real and observable. The bridge from direct and indirect observables to useful constructs is
the process called conceptualization.

Conceptualization

The process through which we specify what we mean when we use particular terms in
research is called conceptualization. Suppose we want to find out, for example, whether
women are more compassionate than men. I suspect many people assume this is the case,
but it might be interesting to find out if it's really so. We can't meaningfully study the
question, let alone agree on the answer, without some working agreements about the
meaning of compassion. They are "working" agreements in the sense that they allow us to
work on the question. We don't need to agree or even pretend to agree that a particular
specification is ultimately the best one.

Conceptualization, then, produces a specific, agreed-on meaning for a concept for the
purposes of research. This process of specifying exact meaning involves describing the
indicators we'll be using to measure our concept and the different aspects of the concept,
called dimensions.

Indicator: An observation that we choose to consider as a reflection of a variable we wish to study. Thus, for example, attending religious services might be considered an indicator of religiosity.

Dimension: A specifiable aspect of a concept. "Religiosity," for example, might be specified in terms of a belief dimension, a ritual dimension, a devotional dimension, a knowledge dimension, and so forth.

An operational definition, as you may recall from Chapter 2, specifies precisely how a concept will be measured, that is, the operations we choose to perform. An operational definition is nominal rather than real, but it has the advantage of achieving maximum clarity about what a concept means in the context of a given study. In the midst of disagreement and confusion over what a term "really" means, we can specify a working definition for the purposes of an inquiry. Wishing to examine socio-economic status (SES) in a study, for example, we may simply specify that we are going to treat SES as a combination of income and educational attainment. In this decision, we rule out other possible aspects of SES: occupational status, money in the bank, property, lineage, lifestyle, and so forth.

Our findings will then be interesting to the extent that our definition of SES is useful for our purpose.
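Purely as an illustration of what such a working definition might look like in practice, here is a minimal Python sketch that treats SES as a simple combination of income and educational attainment. The variable names, the standardisation step and the equal weighting are all assumptions for illustration, not something specified by the text.

    from statistics import mean, stdev

    def ses_scores(incomes, years_of_education):
        """Illustrative operationalisation of SES as the average of standardised
        income and standardised educational attainment (equal weights assumed)."""
        def standardise(values):
            m, s = mean(values), stdev(values)
            return [(v - m) / s for v in values]

        z_income = standardise(incomes)
        z_education = standardise(years_of_education)
        return [(zi + ze) / 2 for zi, ze in zip(z_income, z_education)]

    # Hypothetical data for three respondents
    print(ses_scores([20_000, 35_000, 60_000], [10, 14, 18]))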

Concepts

Concepts are the 'things' that we are trying to measure in research. Here are a few
examples:
• Motivation
• Height
• Self-perception
• Aggression
• IQ
• Weight
• Skill
• Health
• Heart Rate
• Performance level

Concepts are often divided up into those that can be measured 'directly' and those that cannot be. Those that can only be measured by their 'shadows' are referred to as 'constructs'. Using this scheme, height is measured easily, but motivation can only be measured through a set of different indirect measures.

Dimensions of a Concept

In the above examples, the concept height only requires one measure to adequately
describe it. However, IQ requires a whole set of measures relating to particular aspects
of it:

• Memory
• Motor Skills?
• Comprehension
• Analysis

Each of the above aspects can be considered to be different factors or dimensions of the
concept. We could say therefore that IQ is multidimensional (i.e. having more than one
dimension).

Measures

A measure can be a variable, constant or scale depending upon the situation.

Variables & Constants

A constant is a characteristic that has the same value over the entire sample (it does
not vary). A variable is something that can take more than one value. Variables can
sometimes be subdivided into dependent and independent variables.
A dependent variable, also called the effect, response or outcome variable, is what the researcher measures before and/or after manipulating the independent variable (also called the treatment or intervention variable). Often the independent variable is of nominal measurement type. The researcher may actively manipulate the independent variable and see what effect this has upon the dependent variable, or passively observe, thereby collecting a series of measurements. In the second situation, she will have to work out the direction of the relationship by investigating which variable changes first. To do this she will investigate the lag/response time between changes in the independent variable and the observed change in the dependent variable. For example, to observe the effect of amphetamines (i.e. the independent variable) upon running performance (i.e. the dependent variable) it would be necessary to wait until the drug had been absorbed through the stomach.

Often the term 'measure' is used to refer to the dependent variable. I will use the term to
include both independent and dependent variables.

Example 1 Active manipulation = assigning subjects to several groups, each receiving a different regime.

Example 2 Passive observation = observing over a series of years the effect different law
enforcement strategies have upon road traffic accidents.

Example 3 A researcher investigates the effects of two different types of training program on half marathon performance. She randomly assigns half her subjects to one type of training and the rest to the other. She measures performance pre and post training program for all subjects.

Independent variable (factor) = Training type = nominal measurement (two levels)
Dependent variable ('measure') = performance = measurement?

In other words, the value of the dependent variable depends upon the independent variable. This is a causal relationship and is different from a correlation.
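To make the distinction concrete, here is a small Python sketch of Example 3 using entirely hypothetical data: training type is the nominal independent variable (the two level names are invented for illustration), half marathon time is the dependent variable ('measure'), and we simply compare group means after training.

    from statistics import mean

    # Hypothetical data: training type (nominal independent variable, two levels)
    # and half marathon time in minutes (dependent variable / 'measure').
    results = [
        ("interval", 101.5), ("interval", 98.2), ("interval", 104.0),
        ("endurance", 107.3), ("endurance", 103.8), ("endurance", 109.1),
    ]

    for level in ("interval", "endurance"):
        times = [t for group, t in results if group == level]
        print(f"{level}: mean post-training time = {mean(times):.1f} min")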
Scales
These are also called indexes. A scale is a measure that consists of two or more items, which are themselves measures. The best way to understand this is to give a few examples:
Example 1 Socio-economic status = Income + Education + ?
Example 2 Activities of daily living scale =
• Eating
• Dressing / undressing
• Care for own appearance
• Walking
• Done the above tasks for 6 months
For each item a score of 1 is assigned if the patient was unable to do the activity, 2 if the patient was able to do the activity with assistance, and 3 if the patient required no assistance.
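A minimal Python sketch of how such a scale (index) score might be computed by summing the item scores. The 1/2/3 scoring rule follows the text; the function name, the item labels and the idea of summing the items into a single value are my illustrative assumptions.

    # Each ADL item is scored 1 (unable), 2 (able with assistance) or 3 (no assistance).
    ADL_ITEMS = ["eating", "dressing", "appearance", "walking", "sustained_6_months"]

    def adl_score(item_scores):
        """Sum the item scores to give a single scale (index) value.
        Totals range from 5 (unable to do anything) to 15 (fully independent)."""
        assert set(item_scores) == set(ADL_ITEMS), "every item must be scored"
        assert all(s in (1, 2, 3) for s in item_scores.values())
        return sum(item_scores.values())

    patient = {"eating": 3, "dressing": 2, "appearance": 2,
               "walking": 1, "sustained_6_months": 1}
    print(adl_score(patient))   # 9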
Graphical Method of Representing Operationalisation
Because operationalisation is so important, and because how well it has been done often provides the litmus test for a piece of research, I suggest you get into the habit, when reading research articles, of using the following method to analyze the various concepts and measures presented.
Things to note:
The ovals represent concepts. The researcher may well consider several concepts and decide upon measuring a particular smaller concept. For example, in the above, a researcher may decide only to concentrate on physical fitness rather than mental fitness. This is fine as long as the researcher is aware of the limitations. It would be misleading to title an article 'fitness evaluation' if s/he took this approach.
The squares represent measures; where groups ('batteries') of measures provide a score of some sort, I have placed them in another box. Clearly, this is just my way of helping me understand the situation.
Validity
We can now provide a little more clarity in our definition of validity. We can now say that the validity of a measure is the degree to which the measure or measures equal the concept which it is / they are attempting to measure.
Often researchers use well-validated measures rather than attempting to develop their own. Besides the obvious advantage that such a measure is valid, two other advantages are:
• Less effort is required to repeat an experiment
• Results can be compared with previous experiments
Relationship with Reliability
Reliability is the degree of consistency of a measure, or the degree of error of a measure. That is, if all else stays the same, we would expect to obtain the same value repeatedly from the measure if it were reliable.
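One common way to quantify this consistency is a test-retest design: measure the same subjects twice under the same conditions and correlate the two sets of values. The Python sketch below, with entirely hypothetical data, is my illustration rather than anything from the text; it uses a Pearson correlation as the reliability coefficient.

    from statistics import correlation  # Python 3.10+

    # Hypothetical test-retest data: the same six subjects measured on two occasions.
    occasion_1 = [62, 70, 55, 80, 66, 74]
    occasion_2 = [60, 72, 57, 79, 68, 73]

    # A reliable measure yields near-identical values, so the correlation is close to 1.
    print(f"test-retest reliability (Pearson r) = {correlation(occasion_1, occasion_2):.2f}")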
It should be noted that a measure can be reliable but not necessarily valid. A researcher may believe s/he is measuring perceived exertion by measuring heart rate with a heart rate monitor. Most likely the heart rate monitor provides a very accurate measure of heart rate, but it may not reflect the level of perceived exertion the subject is feeling. You can therefore have a measure that is very reliable but totally invalid. Let's now consider the opposite situation.
Can we have a measure that is valid but not very reliable? A valid measurement implies
a relatively accurate measure (Portney & Watkins 1994 p.69). For example, if you decide
to measure heart rate by palpating the radial artery and using a watch with only a minute hand for timing, you are unlikely to produce accurate heart rate measurements. You may
assume you are measuring heart rate but most of the time the results will not reflect this
concept.
Types of Validity
There are several types of validity; only one is 'validity of measurements'. Further details can be found in Robson (1993), Portney & Watkins (1994), Graziano & Raulin (1993) and Cook & Campbell (1979). Details of these other types of validity are given in subsequent chapters.
We will consider a little about validity of measurements in this chapter. Besides the references given above, one article to start with is Taylor, J., 'A Review of Validity Issues in Sport Psychological Research', Journal of Sport Behaviour 10(1), 3-12. Be warned: this is a difficult read and uses different terminology to mine in several instances.
We will discuss just three types of validity of measurements: face (logical), content and construct.
Face (logical), Content and Construct Validity of Measurements
Face (logical) validity is the degree of belief that the researcher has that the measure
appears to measure what it is supposed to. It is therefore often called a qualitative
measure.
Construct validity is the degree to which the constructs of interest have been effectively operationalised (Cook & Campbell 1979). Content validity is the same thing, but this time it can apply to either a construct or a concept. This is, at least, how I interpret the distinction between construct and content validity; I always feel a little uneasy about this, and if you think I'm wrong I would be quite happy to discuss it with you. Cook and Campbell (1979) and Robson (1993) do not mention content validity (see Cook & Campbell 1979, p.38 and p.59). Portney and Watkins (1993, pp.71-77) provide a detailed description of each type. Oppenheim (1992, p.162) considers content validity to be concerned with how well balanced the measures are in relation to the 'content domain' to be measured, but fails to define content domain.
I believe both content and construct validity can be expressed using our diagram developed to show the process of operationalisation, except this time we work backwards.
From the above you will have begun to realize that in reading the literature you will find many differing and conflicting definitions of validity. This is particularly the case concerning the various types of validity of measurements.
Importance of Good Operationalisation
The importance of carrying out this process correctly cannot be overstated. Clearly,
a whole research project may be a total waste of time if it is not measuring what it is
supposed to. This process is equally important for the independent and dependent
variables.
Context Specific
A context is another word for situation. Some measures work well in all situations (contexts), e.g. a ruler for measuring height. However, some only work well in specific situations, e.g. a mercury thermometer would not work on the moon. Because of this, the validity of a measure is said to be context specific.
Obviously part of the context consists of the researcher(s) and the subjects. This suggests that a measure developed using an all-female group of subjects may not necessarily be as valid when used on a group of males.
Summary
The above section has mainly been concerned with the process of operationalisation.
However, to understand this process fully it was necessary to discuss what concepts,
constructs and measures are along with 'validity of measurements'. Various types of
variables and causal / correlation relationships were also presented.
