SPSS

The document discusses various data analysis techniques that can be used in marketing research, focusing on how to use SPSS to analyze data. It covers both univariate and multivariate techniques. For multivariate analysis, the two main classes covered are analysis of difference (e.g. t-test, ANOVA) and analysis of association (e.g. cross-tabulation, correlation, regression). Examples are provided for how to conduct cross-tabulation and t-tests in SPSS.


North South University, School of Business MKT 631 Marketing Research Instructor: Mahmood Hussain, PhD

Data Analysis for Marketing Research - Using SPSS


Introduction
In this part of the class, we will learn various data analysis techniques that can be used in marketing research. The emphasis in class is on how to use a statistical software package (SAS, SPSS, Minitab, SYSTAT, and so on) to analyze the data and how to interpret the results in the computer output. The main body of the data analysis part will be devoted to multivariate analysis, where we analyze two or more variables.

Classification
In multivariate data analysis (i.e., when we have two or more variables), we try to find the relationship between the variables and to understand the nature of that relationship. The choice of technique depends primarily on what measurement scales were used for the variables and on how we are trying to relate the variables. For example, we can look at the relationship between variables x and y by examining the difference in y at each value of x, or by examining the association between the two variables. In class, two classes of data analysis techniques will be taught: analysis of difference and analysis of association.

I. Analysis of Difference (e.g., t-test/z-test, Analysis of Variance (ANOVA))
II. Analysis of Association (e.g., cross-tabulation, correlation, regression)

Suggestion: Before you start any of the data analysis techniques, check:
1. What are the variables in my analysis?
2. Can the variables be classified as dependent and independent variables?
3. If yes, which is the dependent variable and which is the independent variable?
4. What measurement scales were used for the variables?


Data Analysis I: Cross-tabulation (& Frequency)
In cross-tabulation, there is no specific dependent or independent variable. Cross-tabulation simply analyzes whether two (or more) variables measured on nominal scales are associated, or equivalently, whether the cell counts for the values of one variable differ across the different values (categories) of the other variable. All of the variables are measured on a nominal scale.

Example 1
Research objective: To see whether the preferred brands (brand A, brand B, and brand C) are associated with the locations (Denver and Salt Lake City); is there any difference in brand preference between the two locations?
Data: We selected 40 people in each city and measured what their preferred brands were. If the preferred brand is A, favorite brand = 1; if B, favorite brand = 2; if C, favorite brand = 3.

Denver
Favorite brand, by ID:
ID  1-10:  1 3 3 1 3 2 3 3 3 1
ID 11-20:  1 2 2 2 3 3 1 3 1 2
ID 21-30:  1 3 3 3 1 3 3 3 3 3
ID 31-40:  3 3 2 3 1 3 3 3 2 1

Salt Lake City


Favorite brand, by ID:
ID  1-10:  2 1 2 1 2 1 2 1 2 1
ID 11-20:  1 1 1 1 2 2 1 2 1 2
ID 21-30:  3 3 2 3 2 1 2 3 3 2
ID 31-40:  2 2 1 2 1 1 1 1 1 1
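The counting that the Crosstabs procedure performs can be made concrete outside SPSS. A minimal Python sketch (standard library only; not part of the SPSS workflow) that tallies the 40 responses per city typed from the tables above:

```python
from collections import Counter

# Favorite-brand codes (1 = brand A, 2 = brand B, 3 = brand C), in ID order
denver = [1,3,3,1,3,2,3,3,3,1, 1,2,2,2,3,3,1,3,1,2,
          1,3,3,3,1,3,3,3,3,3, 3,3,2,3,1,3,3,3,2,1]
slc    = [2,1,2,1,2,1,2,1,2,1, 1,1,1,1,2,2,1,2,1,2,
          3,3,2,3,2,1,2,3,3,2, 2,2,1,2,1,1,1,1,1,1]

# Cross-tabulation: brand counts within each city
crosstab = {city: Counter(resp) for city, resp in
            [("Denver", denver), ("Salt Lake City", slc)]}

for city, counts in crosstab.items():
    print(city, [counts[brand] for brand in (1, 2, 3)])
# Denver [10, 7, 23]
# Salt Lake City [19, 16, 5]
```

These per-city counts are exactly the cell counts SPSS reports in the crosstabulation table below.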

Procedures in SPSS
1. Open SPSS. In the upper-left corner you will see SPSS Data Editor.
2. Type in the new data (and save it!). Each column represents one variable. Or, if you already have the data file, just open it (data_crosstab_1.sav).
3. Go to the top menu bar and click on Analyze. Move your cursor to Descriptive Statistics.
4. Right next to Descriptive Statistics, click on Crosstabs. Then you will see:
[Crosstabs dialog box: a variable list (Location, Brand) on the left; empty Row(s): and Column(s): boxes; OK, Paste, Reset, Cancel, and Help buttons; Statistics, Cells, and Format buttons at the bottom. The variable names shown are for this dataset and will differ for a different dataset.]

5. Move your cursor to the first variable in the upper-left box and click it. In this example, it is Location. Click the arrow button just left of the Row(s): box. This will move Location into the Row(s): box. In the same way, move the other variable, Brand, into the Column(s): box. You now have:

[Crosstabs dialog box, now with Location in the Row(s): box and Brand in the Column(s): box.]

6. Click on Statistics at the bottom. Another menu box will come up. In it, check Chi-square and, under Nominal, Contingency coefficient. (The other options in this box include Correlations, Phi and Cramér's V, Lambda, and Uncertainty coefficient.) Then click on Continue in the upper-right corner.
7. You are returned to the original menu box. Click on OK in the upper-right corner.

8. Now you have the final results in the Output - SPSS Viewer window, as follows:

Crosstabs

Case Processing Summary
                    Valid           Missing         Total
                    N    Percent    N    Percent    N    Percent
LOCATION * BRAND    80   100.0%     0    .0%        80   100.0%

BRAND * LOCATION Crosstabulation (Count)
           LOCATION 1   LOCATION 2   Total
BRAND 1        10           19         29
BRAND 2         7           16         23
BRAND 3        23            5         28
Total          40           40         80

Chi-Square Tests
                      Value     df   Asymp. Sig. (2-sided)
Pearson Chi-Square    17.886a    2   .000
Likelihood Ratio      18.997     2   .000
N of Valid Cases      80
a. 0 cells (.0%) have expected count less than 5. The minimum expected count is 11.50.

Symmetric Measures
                                            Value   Approx. Sig.
Nominal by Nominal  Contingency Coefficient .427    .000
N of Valid Cases                            80
a. Not assuming the null hypothesis. b. Using the asymptotic standard error assuming the null hypothesis.

Interpretation
The first table simply shows that there are 80 valid observations in the data set. Recall the data: there were 40 observations in each city.

From the second table, among the 40 people in Denver (location = 1), 10, 7, and 23 people prefer brands A, B, and C, respectively. On the other hand, 19, 16, and 5 of the 40 people in Salt Lake City (location = 2) prefer brands A, B, and C, respectively. From the third table, the chi-square value is 17.89 (Pearson Chi-Square = 17.886) and the associated p-value is 0.00 (Asymp. Sig. = .000), which is less than 0.05. Therefore, we conclude that people in the two cities prefer different brands, or equivalently, that consumers' favorite brand is associated with where they live. (This conclusion is also supported by the final table, which shows a contingency coefficient of 0.427.) Looking at the column totals or row totals in the second table, we can also get the frequency for each variable: among all 80 people in the sample, 29, 23, and 28 said that their favorite brands are A, B, and C, respectively; 40 people were selected from Denver (location = 1) and the other 40 from Salt Lake City (location = 2).
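The chi-square statistic and the contingency coefficient SPSS reports can be verified by hand. A minimal Python sketch (standard library only; the 3x2 table of observed counts is taken from the output above):

```python
import math

# Observed counts: rows = brands A, B, C; columns = Denver, Salt Lake City
observed = [[10, 19],
            [ 7, 16],
            [23,  5]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square: sum over cells of (O - E)^2 / E,
# where E = (row total * column total) / n
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

# Contingency coefficient: C = sqrt(chi2 / (chi2 + n))
c = math.sqrt(chi2 / (chi2 + n))

print(round(chi2, 3), round(c, 3))  # 17.886 0.427
```

Both values match the SPSS output (Pearson Chi-Square = 17.886, Contingency Coefficient = .427); only the p-value requires the chi-square distribution itself.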

Example 2: 49ers vs. Packers
Research objective: To see whether winning (or losing) a football game is associated with whether the game is a home game or an away game.

Data: We selected the data of the football games between the 49ers and the Packers for the period between 19XX and 19&&.

Game at (1 = SF, 2 = GB):                    1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
Game result (1 = 49ers win, 2 = 49ers lose): 1 1 1 2 1 2 1 2 1 1 2 2 1 2 1 2 1 1 2 1 1 2 1 2 1 2 2 2 1 2

Results

GAME_AT * RESULT Crosstabulation (Count)
             RESULT 1   RESULT 2   Total
GAME_AT 1       12          3        15
GAME_AT 2        4         11        15
Total           16         14        30

Chi-Square Tests
                              Value    df   Asymp. Sig.   Exact Sig.   Exact Sig.
                                            (2-sided)     (2-sided)    (1-sided)
Pearson Chi-Square            8.571b    1   .003
Continuity Correctiona        6.563     1   .010
Likelihood Ratio              9.046     1   .003
Fisher's Exact Test                                       .009         .005
Linear-by-Linear Association  8.286     1   .004
N of Valid Cases              30
a. Computed only for a 2x2 table
b. 0 cells (.0%) have expected count less than 5. The minimum expected count is 7.00.

Symmetric Measures
                                            Value   Approx. Sig.
Nominal by Nominal  Contingency Coefficient .471    .003
N of Valid Cases                            30
a. Not assuming the null hypothesis. b. Using the asymptotic standard error assuming the null hypothesis.

Interpretation
Out of 30 games in total, the 49ers won 16 and lost 14. Each team tends to win when the game is its home game: among the 16 49ers victories, 12 were played at SF, and among the 14 Packers victories, 11 were played at GB. The chi-square for this table is 8.571 (Pearson Chi-Square = 8.571) and the p-value is 0.003 (Asymp. Sig. = .003). Since the p-value (0.003) is less than 0.05 (we typically assume a pre-specified significance level of 0.05) and the contingency coefficient is far from 0 (.471), we conclude that the likelihood that a team wins the game is associated with whether the game is a home game or an away game for that team.
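For a 2x2 table, SPSS additionally reports a continuity-corrected chi-square (the Yates correction in the Continuity Correction row). Both statistics can be checked by hand; a sketch (standard library only; observed counts from the output above):

```python
# Observed counts: rows = game at SF, GB; columns = 49ers win, 49ers lose
observed = [[12,  3],
            [ 4, 11]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0        # Pearson chi-square: (O - E)^2 / E
chi2_yates = 0.0  # Yates continuity correction: (|O - E| - 0.5)^2 / E
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        e = row_totals[i] * col_totals[j] / n
        chi2 += (obs - e) ** 2 / e
        chi2_yates += (abs(obs - e) - 0.5) ** 2 / e

print(round(chi2, 3), chi2_yates)  # 8.571 6.5625 (SPSS displays 6.563)
```

Pearson's 8.571 and the corrected 6.5625 (rounded by SPSS to 6.563) reproduce the first two rows of the Chi-Square Tests table.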

Data Analysis II: t-test (or z-test)

Dependent variable: metric scale (interval or ratio scale)
Independent variable: nonmetric scale (nominal)

The t-test (or z-test) tests whether there is a significant difference in the dependent variable between two groups (categorized by the independent variable).

Example
Research objective: To see whether the average sales of brand A differ between when a price promotion is offered and when it is not.
Data: We measured sales of brand A under promotion (promotion = 1) and at the regular price (promotion = 2).

Promotion = 1, Sales: 75 87 83 45 95 89 74 110 75 84
Promotion = 2, Sales: 51 70 37 62 90 72 45 78 45 76

Dependent variable: Sales
Grouping variable: Promotion (1 = Yes, 2 = No)

Procedures in SPSS
1. Open SPSS. In the upper-left corner you will see SPSS Data Editor.
2. Type in the new data (and save it!). Each column represents one variable. Or, if you already have the data file, just open it (data_t-test_1.sav).
3. Go to the top menu bar and click on Analyze. Move your cursor to Compare Means.
4. Right next to Compare Means, click on Independent-Samples T Test. Then you will see:

[Independent-Samples T Test dialog box: a variable list (Sales, Promo) on the left; empty Test Variable(s): and Grouping Variable: boxes; OK, Paste, Reset, Cancel, and Help buttons; Define Groups and Options buttons.]

These are the variable names and will differ for a different dataset.
5. Move your cursor to the dependent variable in the left-hand box and click it. In this example, it is Sales. Click the arrow button just left of the Test Variable(s): box. This will move Sales into the Test Variable(s): box. Then move the independent (grouping) variable, Promo, into the Grouping Variable: box. Click on Define Groups and specify which numbers stand for which group in your data by typing them into the boxes (Group 1: 1, Group 2: 2). Then click on Continue. You now have:


[Independent-Samples T Test dialog box, now with Sales in the Test Variable(s): box and Promo(1 2) as the Grouping Variable:.]

6. Click on OK in the upper-right corner.

7. Now you have the final results in the Output - SPSS Viewer window, as follows:

Results

T-Test

Group Statistics
SALES   PROMO   N    Mean    Std. Deviation   Std. Error Mean
        1       10   81.70   16.872           5.336
        2       10   62.60   17.386           5.498

Independent Samples Test (SALES)
Levene's Test for Equality of Variances: F = .458, Sig. = .507
t-test for Equality of Means:
Equal variances assumed:     t = 2.493, df = 18,     Sig. (2-tailed) = .023, Mean Difference = 19.10, Std. Error Difference = 7.661, 95% CI of the Difference: (3.004, 35.196)
Equal variances not assumed: t = 2.493, df = 17.984, Sig. (2-tailed) = .023, Mean Difference = 19.10, Std. Error Difference = 7.661, 95% CI of the Difference: (3.003, 35.197)

Interpretation
From the first table, the mean sales under promotion and non-promotion are 81.7 and 62.6, respectively. The test statistic, t, for this observed difference is 2.49 (t = 2.493). The p-value for this t-statistic is 0.023 (Sig. (2-tailed) = .023). Since the p-value (0.023) is less than 0.05, we reject the null hypothesis and conclude that there is a significant difference in average sales between when the firm offers a price promotion and when it charges the regular price.
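The equal-variances t statistic in the output follows from the pooled-variance formula, which can be checked by hand. A sketch using only the Python standard library (data from the table above):

```python
from statistics import mean, stdev  # stdev is the sample standard deviation

promo   = [75, 87, 83, 45, 95, 89, 74, 110, 75, 84]  # promotion = 1
regular = [51, 70, 37, 62, 90, 72, 45, 78, 45, 76]   # promotion = 2

n1, n2 = len(promo), len(regular)

# Pooled variance (equal variances assumed), df = n1 + n2 - 2 = 18
sp2 = ((n1 - 1) * stdev(promo) ** 2 + (n2 - 1) * stdev(regular) ** 2) / (n1 + n2 - 2)
se_diff = (sp2 * (1 / n1 + 1 / n2)) ** 0.5

t = (mean(promo) - mean(regular)) / se_diff

print(round(mean(promo), 1), round(mean(regular), 1), round(t, 3))  # 81.7 62.6 2.493
```

This reproduces the group means (81.70, 62.60), the standard error of the difference (7.661), and t = 2.493 from the SPSS output; the p-value additionally requires the t distribution with 18 degrees of freedom.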

Data Analysis III: Analysis of Variance (One-way ANOVA)

Dependent variable: metric scale (interval or ratio scale)
Independent variable: nonmetric scale (nominal)

ANOVA (the F-test) tests whether there is a significant difference in the dependent variable among multiple groups (categorized by the independent variable).

Example
Research objective: There are three training programs for salespeople. We want to see whether the different job training methods result in different levels of job satisfaction among the employees.
Data: We trained 15 rookies in the company, with each training method applied to 5 people.

ID #:                     1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Trained by a:             1 1 2 2 3 1 2 3 2 2  1  1  3  3  3
Job satisfaction level b: 5 2 3 4 5 6 2 5 2 3  5  7  4  4  5

a Grouping variable: training program (1 = trained by program #1, 2 = trained by program #2, 3 = trained by program #3)
b Dependent variable: job satisfaction (1: strongly dissatisfied --- 7: strongly satisfied)

Procedures in SPSS
1. Open SPSS. In the upper-left corner you will see SPSS Data Editor.
2. Type in the new data (and save it!). Each column represents one variable. Or, if you already have the data file, just open it (data_anova_1.sav).

3. Go to the top menu bar and click on Analyze. Move your cursor to Compare Means.
4. Right next to Compare Means, click on One-Way ANOVA. Then you will see:

[One-Way ANOVA dialog box: a variable list (Program, Satis) on the left; empty Dependent List: and Factor: boxes; OK, Paste, Reset, Cancel, and Help buttons; Contrasts, Post Hoc, and Options buttons.]

These are the variable names and will differ for a different dataset.
5. Move your cursor to the dependent variable in the left-hand box and click it. In this example, it is Satis. Click the arrow button just left of the Dependent List: box. This will move Satis into the Dependent List: box. Then move the independent (grouping) variable, Program, into the Factor: box.
6. Click on Options. A new sub-menu box will appear. Under Statistics, check Descriptive (other boxes there include Fixed and random effects and Homogeneity of variance test), then click on Continue in the upper-right corner.
7. Click on OK in the upper-right corner.

8. Now you have the final results in the Output - SPSS Viewer window, as follows:

Results
Descriptives (SATIS)
Group   N    Mean   Std. Deviation   Std. Error   95% CI for Mean (Lower, Upper)   Minimum   Maximum
1       5    5.00   1.871            .837         (2.68, 7.32)                     2         7
2       5    2.80   .837             .374         (1.76, 3.84)                     2         4
3       5    4.60   .548             .245         (3.92, 5.28)                     4         5
Total   15   4.13   1.506            .389         (3.30, 4.97)                     2         7

ANOVA (SATIS)
                 Sum of Squares   df   Mean Square   F       Sig.
Between Groups   13.733           2    6.867         4.578   .033
Within Groups    18.000           12   1.500
Total            31.733           14

Interpretation
(1) From the first table, the mean satisfaction of the subjects in group 1 (i.e., those trained by program 1), group 2, and group 3 is 5.0, 2.8, and 4.6, respectively.
(2) Checking the ANOVA table (the second output):

Source    SS    df    MS    F         Sig.
Between   SSG   dfG   MSG   MSG/MSE   p-value
Within    SSE   dfE   MSE
Total     SST   dfT

SST = SSG + SSE: 31.73 = 13.73 + 18.00
dfT = number of respondents - 1 = 15 - 1 = 14
dfG = number of groups - 1 = 3 - 1 = 2
dfE = dfT - dfG = 14 - 2 = 12
MSG = SSG/dfG: 6.87 = 13.73 / 2
MSE = SSE/dfE: 1.50 = 18.00 / 12
F = MSG/MSE (test statistic): 4.58 = 6.87 / 1.50
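The SST = SSG + SSE decomposition above can be checked numerically. A Python sketch (standard library only) using the 15 observations from the data table, grouped by training program:

```python
# Job-satisfaction scores grouped by training program (from the data table)
groups = {
    1: [5, 2, 6, 5, 7],  # IDs 1, 2, 6, 11, 12
    2: [3, 4, 2, 2, 3],  # IDs 3, 4, 7, 9, 10
    3: [5, 5, 4, 4, 5],  # IDs 5, 8, 13, 14, 15
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-groups sum of squares: SSG = sum of n_g * (group mean - grand mean)^2
ssg = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values())
# Within-groups sum of squares: SSE = sum of squared deviations from group means
sse = sum((x - sum(g) / len(g)) ** 2 for g in groups.values() for x in g)

dfg = len(groups) - 1                # 3 - 1 = 2
dfe = len(all_scores) - len(groups)  # 15 - 3 = 12
f = (ssg / dfg) / (sse / dfe)        # F = MSG / MSE

print(round(ssg, 3), round(sse, 3), round(f, 3))  # 13.733 18.0 4.578
```

The computed SSG = 13.733, SSE = 18.000, and F = 4.578 match the SPSS ANOVA table; the p-value (.033) additionally requires the F(2, 12) distribution.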