Types of Statistical Tests by Purpose
A. Descriptive Statistics
Mean, Median, Mode: Measures of central tendency; used to describe the typical or average value in a dataset.
Variance and Standard Deviation: Measures of variability; useful for understanding data spread.
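As a minimal illustration, the sketch below computes these summaries with NumPy and Python's standard statistics module; the temperature values are made up for the example.

```python
import numpy as np
from statistics import mode

# Hypothetical patient temperatures (in degrees C), used only for illustration.
temps = np.array([36.6, 36.9, 37.1, 36.6, 38.2, 36.8, 37.4])

print("Mean:", np.mean(temps))             # arithmetic average
print("Median:", np.median(temps))         # middle value when sorted
print("Mode:", mode(temps.tolist()))       # most frequent value (36.6 here)
print("Variance:", np.var(temps, ddof=1))  # sample variance (ddof=1)
print("SD:", np.std(temps, ddof=1))        # sample standard deviation
```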
B. Comparative Tests
These tests are used to compare means, proportions, or distributions between two or more groups (a code sketch of several of these tests appears after this list).
1. t-tests:
o Independent t-test: Compares the means of two independent groups (e.g., male vs. female patient
temperatures). Use when data is normally distributed and variances are equal.
o Paired t-test: Compares means of two related groups (e.g., before-and-after measurements for the
same patients).
o One-sample t-test: Tests if the mean of a single group is different from a known value (e.g.,
comparing patient temperature mean to a standard).
2. ANOVA (Analysis of Variance):
o One-way ANOVA: Compares means across three or more independent groups (e.g., knowledge
levels across multiple education groups). Assumes normality and equal variances.
o Two-way ANOVA: Assesses the effect of two different categorical independent variables on one
continuous dependent variable (e.g., comparing the effects of knowledge and experience on patient
outcomes).
o Repeated Measures ANOVA: Like a paired t-test but for more than two time points or conditions.
3. Chi-Square Test:
o Chi-Square Test for Independence: Used for categorical data to test whether two variables are independent (e.g., is there an association between gender and knowledge level?).
o Chi-Square Goodness of Fit Test: Tests if a categorical variable’s observed frequencies match
expected frequencies.
4. Mann-Whitney U Test:
o Used to compare medians between two independent groups when data is not normally distributed
(nonparametric alternative to the independent t-test).
5. Wilcoxon Signed-Rank Test:
o Nonparametric alternative to the paired t-test, comparing two related samples.
6. Kruskal-Wallis Test:
o Nonparametric alternative to one-way ANOVA, comparing medians among three or more
independent groups.
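To make the comparative tests above concrete, here is a brief sketch using scipy.stats. The group names and values are hypothetical, and the choice between a parametric test and its nonparametric alternative would normally depend on the assumption checks described later in this document.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical temperature readings for independent groups and paired measurements.
group_a = rng.normal(37.0, 0.4, size=30)
group_b = rng.normal(37.3, 0.4, size=30)
group_c = rng.normal(37.1, 0.4, size=30)
before  = rng.normal(37.5, 0.4, size=30)
after   = before - rng.normal(0.3, 0.2, size=30)   # paired follow-up readings

# t-tests (parametric)
print(stats.ttest_ind(group_a, group_b, equal_var=True))  # independent t-test
print(stats.ttest_rel(before, after))                     # paired t-test
print(stats.ttest_1samp(group_a, popmean=37.0))           # one-sample t-test

# One-way ANOVA across three independent groups
print(stats.f_oneway(group_a, group_b, group_c))

# Chi-square tests on categorical counts (hypothetical 2x2 table)
table = np.array([[30, 10],   # e.g., gender x knowledge-level counts
                  [20, 25]])
chi2, p, dof, expected = stats.chi2_contingency(table)     # test of independence
print(chi2, p)
print(stats.chisquare(f_obs=[18, 22, 20], f_exp=[20, 20, 20]))  # goodness of fit

# Nonparametric alternatives
print(stats.mannwhitneyu(group_a, group_b))                # vs. independent t-test
print(stats.wilcoxon(before, after))                       # vs. paired t-test
print(stats.kruskal(group_a, group_b, group_c))            # vs. one-way ANOVA
```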
C. Correlation and Association Tests
These tests measure the strength and direction of the relationship between two variables (a code sketch follows this list).
1. Pearson Correlation:
o Measures linear correlation between two continuous variables (e.g., age and temperature); assumes
normality.
2. Spearman’s Rank Correlation:
o Nonparametric correlation that assesses monotonic relationships between two continuous or ordinal
variables.
3. Kendall’s Tau:
o Nonparametric test measuring strength of association between two ordinal variables, especially with
small sample sizes.
4. Chi-Square Test of Association:
o For categorical variables, evaluates if there’s an association between them (e.g., association between
intervention type and patient outcome).
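A minimal sketch of these association measures with scipy.stats; the age and temperature values and the 2x2 table of counts are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

age = rng.uniform(20, 80, size=40)
temp = 36.5 + 0.005 * age + rng.normal(0, 0.2, size=40)  # weak positive trend

print(stats.pearsonr(age, temp))    # linear correlation; assumes normality
print(stats.spearmanr(age, temp))   # rank-based; monotonic relationships
print(stats.kendalltau(age, temp))  # ordinal association; suits small samples

# Chi-square test of association for two categorical variables
# (hypothetical counts: intervention type x patient outcome)
table = [[25, 15],
         [18, 22]]
print(stats.chi2_contingency(table))
```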
D. Regression Tests
E. Nonparametric Tests
1. Sign Test:
o Compares median of a sample to a known value or compares paired observations without requiring
normality.
2. Friedman Test:
o Nonparametric alternative to repeated measures ANOVA for comparing three or more related groups (both tests in this section are sketched in code below).
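The sketch below illustrates both tests with scipy.stats. SciPy has no dedicated sign-test function, so the usual approach of applying an exact binomial test to the signs of the paired differences is shown instead; all values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Sign test: paired observations, no normality assumption.
before = rng.normal(5.0, 1.0, size=20)
after = before - rng.normal(0.4, 1.0, size=20)
diffs = after - before
n_pos = int(np.sum(diffs > 0))
n_neg = int(np.sum(diffs < 0))
# Under H0 (median difference = 0), positive and negative signs are equally likely.
sign_test = stats.binomtest(n_pos, n_pos + n_neg, p=0.5)
print("Sign test p-value:", sign_test.pvalue)

# Friedman test: three or more related measurements on the same subjects.
time1 = rng.normal(5.0, 1.0, size=15)
time2 = time1 - rng.normal(0.3, 0.5, size=15)
time3 = time1 - rng.normal(0.6, 0.5, size=15)
print(stats.friedmanchisquare(time1, time2, time3))
```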
How to Choose the Right Statistical Test
1. Type of Variable(s):
o Continuous, categorical, or ordinal variables impact the choice. For example, continuous outcomes
often use t-tests, ANOVA, or regression.
2. Number of Groups:
o Comparing two groups? Use t-tests or Mann-Whitney U. More than two groups? Consider ANOVA
or Kruskal-Wallis.
3. Normality of Data:
o Normally distributed data allows parametric tests; if not, use nonparametric alternatives (see the code sketch after this list).
4. Paired vs. Independent Samples:
o Paired samples (e.g., pre- and post-tests for the same group) use paired tests like the paired t-test or
Wilcoxon signed-rank, whereas independent samples use independent t-tests or Mann-Whitney U.
5. Study Design:
o Observational studies may benefit from correlation or regression to adjust for confounders, while randomized experimental studies can compare groups directly using tests such as t-tests, ANOVA, or logistic regression.
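As an illustration of points 3 and 4 above, the sketch below checks normality with the Shapiro-Wilk test before deciding between an independent t-test and the Mann-Whitney U test. The 0.05 cutoff and the simulated data are assumptions made for the example, not a universal rule.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.normal(37.0, 0.4, size=25)
group_b = rng.exponential(0.5, size=25) + 36.8   # deliberately skewed group

# Shapiro-Wilk: H0 is that the sample comes from a normal distribution.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    # Both groups look approximately normal: parametric test.
    print("Independent t-test:", stats.ttest_ind(group_a, group_b))
else:
    # At least one group departs from normality: nonparametric alternative.
    print("Mann-Whitney U:", stats.mannwhitneyu(group_a, group_b))
```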