Quant Methods

The document outlines a session on quantitative methods and measurement design, highlighting the importance of validity and reliability in research. It discusses various types of validity, the differences between tests and rating scales, and the process of developing measurement instruments. Additionally, it emphasizes the need for systematic procedures in measuring constructs and the significance of operational definitions in research.


Quant. Methods & Measurement Design
“If we have data, let’s look at data. If all
we have are opinions, let’s go with
mine.”
What questions do you have as
we get started?
Reminders & To-Do
 Reading for this week: Kettler Chapter 3 (measurement design/development)
 We’ll discuss the reading briefly, but NOT everything!
 Planning Guide 1 (P1) – feedback posted on Blackboard
 P2 – in class today
 To-do for next week:
 Merriam & Tisdell chapters 1, 3, & 4
 P3, P4, & P5 (in class)
 Reflective journal posting 1 (after classes next week)
Plan for Today
 Quick revisit of validity – we’ll discuss more in week 4
 The quantitative approach – very broadly
 Correlations, experimentation, and beyond!
 Assessment & measurement – briefly
 Direct & indirect measures
 Tests vs. rating scales
 Process & instrument development as research
 Planning Guide 2 – brainstorming and initial research question drafts
Some Quick Notes About Validity
 Kettler (2019) discusses four types of validity:
 Statistical conclusion validity
 Internal validity
 Construct validity
 External validity
 We’ll discuss validity in more depth in ~3 weeks, but…
 I don’t think he does a great job of making it accessible right off the bat!
So, Validity in General
 Validity & reliability are often used to describe measures that we use in research
 E.g., is the Wechsler Intelligence Scale for Children a valid measure of intelligence in childhood?
 Validity can be thought of as accuracy, and reliability is then consistency/precision
 A measure having good reliability does NOT mean that it has good validity
 We can be consistently wrong!
 However, we do need to have reliability in order to have validity
 If a measure isn’t reliable, it can’t be valid
Thinking about it Visually
Statistical Conclusion Validity
 Are the conclusions you make from your analysis (about the relationships among variables) reasonable based on the data & your question?
 You didn’t make a Type I error (false positive)
 You didn’t make a Type II error (false negative)
 Depends on using the correct analysis/analyses
 For example, let’s say we gathered data on whether or not study subjects believe that humans are the product of evolution by natural selection: Yes/No/Unsure
 If we use something like a t-test to analyze these data, we’re violating the core assumptions or requirements of that test – that leads to problems with this type of validity
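For categorical responses like the Yes/No/Unsure item above, a chi-square test is one analysis whose assumptions actually fit the data. A minimal sketch (the counts are made up, and the statistic is computed by hand so the example stays dependency-free):

```python
# Hypothetical counts of a categorical item (Yes / No / Unsure) from two
# groups. A t-test assumes interval-scale data; a chi-square test of
# independence is built for count data like this.

def chi_square_statistic(table):
    """Chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows = groups, columns = Yes / No / Unsure (fabricated numbers)
table = [[30, 10, 10],
         [15, 25, 10]]
print(round(chi_square_statistic(table), 2))  # → 11.43
```

The resulting statistic would then be compared against a chi-square distribution with the appropriate degrees of freedom; the point here is only that the analysis matches the measurement level of the data.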
Internal Validity
 Does the evidence actually support the claims made about a causal relationship between two variables?
 A study finds that the correlation between ice cream sales and motorcycle sales is +0.89 – a very strong positive correlation
 As a result, the researchers claim that buying motorcycles makes people buy ice cream due to the excess heat and wind as compared to driving a car.
 Does the evidence support their claim? NO!
 Lots of things impact internal validity – not just experimenter bias/mistakes!
Construct Validity
 Does your measure actually measure what it’s supposed to?
 For instance, does my new intelligence test actually measure intelligence, or does it just measure current knowledge?
 This is based on a number of things:
 Characteristics of the construct (what is intelligence?)
 Alignment with previous measures (do results for my new test correlate with the WAIS?)
 Lack of alignment with measures of other constructs (my intelligence test shouldn’t be strongly correlated with a test of neuroticism)
External Validity
 How generalizable or transferrable are your results?
 My dissertation research looked at open learning about car safety systems among college students
 How well might those results generalize to…
 …all college students learning about car safety systems?
 …new drivers learning about car safety systems?
 …adults learning about car safety systems?
 …children learning about car safety systems?
 Depends on (among other elements):
 The context of the study
 The context you hope to generalize to
 Sampling/selection procedures, other potential biases
Questions about
validity?
Quantitative Research
 What are the defining characteristics?
 Focus on quantities
 Test scores, ratings, etc.
 Most often large, random samples
 Single-subject designs can be an exception!
 Evaluating theories and hypotheses using objective (as much as possible) measures
 Validity/reliability are crucial
 Deductive process
 “Top-down”
Deductive Reasoning in Quant. Research

• “Top-down” reasoning
• Move from more general
(theory) to more specific (case)
• Drawing a specific conclusion
based on a general premise
• Start with theory:
• Make a hypothesis
• Test
• Confirm or refute
Types of Relationships

CAUSATION
 X caused (is responsible for) the change in Y
 Experimentation allows us to find this
 Controlling all other variables

CORRELATIONS/ASSOCIATIONS
 X and Y typically change together
 Ranges from -1 to 1
 0 is no relationship; closer to 1 or -1, stronger relationship
 Correlation is NOT causation
 May be other variables involved
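As a rough illustration of what that -1 to 1 number means, here is a minimal Pearson correlation computed by hand on fabricated data (real analyses would use a statistics package, but the formula is short enough to show directly):

```python
# A minimal sketch of the Pearson correlation coefficient (r), which
# always falls between -1 and 1. The data below are made up.
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

hours_studied = [1, 2, 3, 4, 5]
test_scores = [55, 62, 70, 71, 80]
print(round(pearson_r(hours_studied, test_scores), 2))  # → 0.98
```

A value near 0.98 says the two variables move together very strongly; it says nothing about studying *causing* the scores.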
Brief Aside on Relationships
 There are other types of relationships that we can find with quant. research, such as:
 Moderating effects
 X impacts Y, but its impact depends on (is moderated by) Z
 Mediating effects
 X impacts Z, and then Z impacts Y
 X may or may not impact Y directly
 For now, we’ll just worry about causation & association
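One concrete way to see moderation: the slope of X on Y changes depending on the level of the moderator Z. A tiny sketch with fabricated data:

```python
# Hypothetical illustration of a moderating effect: the X→Y slope
# differs across levels of Z. All numbers below are invented.

def slope(xs, ys):
    """Least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

xs = [0, 1, 2, 3, 4]
ys_when_z0 = [10, 10.5, 11, 11.5, 12]  # when Z = 0, X barely matters
ys_when_z1 = [10, 13, 16, 19, 22]      # when Z = 1, X matters a lot

print(slope(xs, ys_when_z0))  # → 0.5
print(slope(xs, ys_when_z1))  # → 3.0
```

In a regression framework, this same idea shows up as an interaction term (X × Z) in the model.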
Quantitative Methods – Causation
 Experimentation
 “Gold standard”
 2+ groups*
 Experimental plus control
 Recruit -> randomize -> intervention (manipulate IV) -> test DV
 Causal relationships
 True experiments vs. quasi-experiments
Quant. Methods – Survey
• Strengths:
• Easy to gather a lot of information
• Can ask about subjects that are difficult
to test
• Weaknesses:
• Susceptible to subject bias/expectations
• Susceptible to poor wording of
questions
• Typically correlational
• Causation cannot be found in a survey
Quant. Methods – Observation
 Surveys are a form of observational work
 We can also use quantitative observations in a more traditional sense
 Naturalistic or lab/other setting
 Direct or participant
 Similar strengths/weaknesses to surveys:
 Can gather a lot of data on variables that are difficult to manipulate
 Can’t establish causation – other things at play
 **Very susceptible to researcher bias without a clear protocol
Questions before
we move on?
Let’s talk
measurement
Let’s talk Measurement
 Research is empirical – it’s based on observations and measurements
 That is, the data for a study (especially quant.) are gathered by observing and measuring changes in variables
 In order to measure those variables correctly, we need systematic, accurate procedures for doing that
 Sometimes we can measure things directly (e.g., height or weight), but often not!
 When we can’t, we need to figure out indirect measures
About those indirect measures…

CONSTRUCT
 Internal characteristics of an individual that we can’t directly measure
 Personality, intelligence, etc.
 These help us describe/explain behavior
 They seem introverted, they seem to be very open to new ideas, etc.

OPERATIONAL DEFINITIONS
 Give us a unified, shared way we can think about and measure a construct
 Give us specific rules for how to define, measure, and test hypotheses about that construct
 Two people with the same op. def. should consistently agree on the measure of that construct
Let’s create an operational definition
What is “learning?”
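One way to see what an operational definition buys you: once the rule is explicit, anyone applying it reaches the same verdict. A toy sketch, in which “learning” is hypothetically defined as a gain of at least 10 points from pretest to posttest on the same instrument (the threshold and scores are invented):

```python
# A toy operational definition: it turns the fuzzy construct "learning"
# into an explicit, repeatable measurement rule. The 10-point threshold
# is an arbitrary illustration, not a recommendation.

def learned(pretest: float, posttest: float, threshold: float = 10.0) -> bool:
    """Operational rule: did this student 'learn'?"""
    return (posttest - pretest) >= threshold

# Two raters applying the same rule always agree on the outcome.
print(learned(60, 75))  # → True  (gain of 15)
print(learned(60, 65))  # → False (gain of 5)
```

Whether this is a *good* definition of learning is exactly the construct-validity question from earlier; the point is only that it is unambiguous.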
Another Way to Think About This Issue

Are snakes, sharks, and crocodiles more dangerous than cattle, horses, and deer?

How does the answer to this question depend on operational definitions?
OK, so we have a definition – how do we measure that variable?
 Kettler (2019) distinguished between two major approaches to measurement/assessment:
 Tests
 “Direct” measure of the construct of interest
 If we’re interested in intelligence, we can implement a (valid!) intelligence test
 Seen as more objective than rating scales, because they are more direct
 Rating scales
 Rather than scoring directly, a rater indirectly evaluates the behavior of the individual
 The rater sometimes is the individual being rated
 Many use Likert-type scales, but not all
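As a small illustration of how rating-scale responses are often handled, here is a hypothetical scoring routine for a 5-point Likert-type scale. Negatively worded items are reverse-scored before summing, a common practice so that higher totals consistently mean “more” of the construct (the items, responses, and which items are reversed are all made up):

```python
# Hypothetical scoring of a 5-point Likert-type scale
# (1 = strongly disagree ... 5 = strongly agree).

SCALE_MAX = 5

def score_scale(responses, reverse_items):
    """Sum Likert responses, reverse-scoring the given item indices."""
    total = 0
    for i, r in enumerate(responses):
        # Reverse-scoring maps 1→5, 2→4, 3→3, 4→2, 5→1
        total += (SCALE_MAX + 1 - r) if i in reverse_items else r
    return total

# Items 1 and 3 (0-indexed) are negatively worded in this made-up scale.
print(score_scale([4, 2, 5, 1, 3], reverse_items={1, 3}))  # → 21
```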
Actually Developing Measures
 Kettler (2019) provides a five-phase process:
 1. Planning – establishing operational definitions, etc.
 2. Development – creating items, reviewing, etc.
 3. Testing (piloting) – piloting the measure with users
 4. Completing forms – designing final versions, etc.
 5. Sharing results – scoring, documentation, etc.
 We won’t talk through all five of these in detail for the sake of time, but let me know if you have questions!
Instrument Development as Research
 Creswell & Clark (2018) provide a mixed methods approach to instrument design as research
 Goal is to create a psychometrically sound measure of the construct of interest
 Exploratory sequential design:
 Collection of initial qualitative data
 Analysis of qualitative data
 Initial instrument design based on qual. sample
 Administered/tested with wider sample*
 Update instrument & repeat testing as needed
 Publish/disseminate/etc.
Planning Guide 2
 Planning Guide 2 (P2) is all about getting started on a research question
 You are welcome to use any of the perspectives we discussed last week to approach a topic of interest
 Keep in mind that each approach might ask different questions
 E.g., dropping out of school:
 Positivist: to what extent does dropping out of school impact income level?
 Critical: what systems make particular groups more likely to drop out of school, and how can we address/fix those systems?
Planning Guide 2
 Planning Guide 2 (P2) is all about getting started on a research question
 Coming up with questions:
 Two major approaches: start with interesting questions and find methods that fit OR choose a method and find a question that works with it
 Your questions could have a clear/unequivocal answer OR they can be more interpretive – quant. vs. qual.!
 Be mindful about specific populations or sites where you could potentially gather data
 Your question should involve some kind of connection or relationship, but not necessarily correlational or causal (doesn’t have to be quant.)
Based on your P2
discussions…

What would you like to discuss? What questions do


you have?
If we have time: item development
 As a group, choose a construct to discuss
 E.g., personality, intelligence, fear, etc. – it can be connected to school psych or not!
 Create an operational definition for your construct
 What are you really trying to get at? What would tell you about that construct?
 Try writing some (3-5) questions that could be used on either a test OR a rating scale in order to assess that construct
 Test or rating scale?
 What questions will you ask, and how will they be answered?
 How does each question address the construct?