
GSCA Pro 1.1 User’s Manual

Heungsun Hwang

McGill University, Montreal, Canada

Gyeongcheol Cho

McGill University, Montreal, Canada

Hosung Choo

Kwangwoon University, Seoul, Korea

Last updated June 25, 2021

DOI: 10.13140/RG.2.2.28162.61127

Website: www.gscapro.com

Facebook: https://www.facebook.com/GSCAPro

Twitter: https://twitter.com/GSCAPro

Google Discussion Group: https://groups.google.com/g/gsca-pro


Table of Contents

I. Downloading and Opening GSCA Pro 1.1
II. GSCA Pro Graphical Interface
III. General Information
IV. Pre-Analysis - Descriptive Statistics
V. Analysis
1. Basic GSCA: Single Group Analysis
2. Basic GSCA with Constrained Parameters
3. Basic GSCA: Multigroup Analysis
4. GSCA with 2nd-order Components
5. Nonlinear GSCA
6. GSCA with Component Interactions
7. Multilevel GSCA
8. Regularized GSCA
9. Integrated GSCA
VI. Post Analysis
1. Model Comparison
2. Mediation Analysis
3. Conditional Process Analysis
VII. Preference – Estimation Options
VIII. References

Downloading and Opening GSCA Pro

Users download GSCA Pro 1.1.zip from www.gscapro.com and unzip it. Please download the
software only from this website.

Then, double-click on GSCA Pro 1.1.exe to open the software.

Important Notices

 When double-clicking on GSCA Pro 1.1.exe, Windows 10 users will receive the “Windows protected your PC” warning message, as shown on the right. Here is what they need to do:

o Don’t click on the “Don’t run” option


o Click on the “More info” option
o A new popup window will then appear
o Click on “Run anyway”

 An antivirus program on users’ PC can prevent GSCA Pro 1.1 from running. If this happens, users
need to temporarily disable their antivirus program, or add GSCA Pro 1.1 to the antivirus
program’s Trusted Program List. The links below describe how to add a trusted program in several
antivirus programs.

o Trend Micro: https://docs.trendmicro.com/all/ent/officescan/v11.1/en-us/osce_11.1_sp1_agent_olh/Trusted-Program-List.html
o McAfee: https://community.mcafee.com/t5/SecurityCenter/How-to-add-programs-to-the-Trusted-List/td-p/72462
o Norton: https://www.providesupport.com/help/troubleshooting/norton-internet-security

 Windows 7 users may receive the “Windows cannot access the specified device, path, or file” warning message. Then, please refer to the following link: https://support.microsoft.com/en-us/topic/-windows-cannot-access-the-specified-device-path-or-file-error-when-you-try-to-install-update-or-start-a-program-or-file-46361133-47ed-6967-c13e-e75d3cc29657

 We do not recommend creating GSCA Pro’s folder within an existing folder whose
contents are synchronized by a cloud storage service (e.g., a Dropbox folder). Doing so may
interrupt the execution of the software or shut it down unexpectedly.

GSCA Pro’s Graphical Interface

Upon opening GSCA Pro, the following graphical interface will appear.

(Screenshot: the GSCA Pro graphical interface, with regions (1)–(7) labeled as described below.)

(1) Menu Bar contains top-level menus, including [File], [Analysis], [Pre-Analysis], [Post-
Analysis], and [Help].
- In [File], users can create a new project, open an old project, save a current
project, save a project as a different file, or exit the program.
- In [Analysis], users can select various analytic features of GSCA.
- In [Pre-Analysis], users can calculate descriptive statistics.
- In [Post-Analysis], users can conduct a supplementary analysis after fitting
models, including model comparison, mediation analysis, or conditional process
analysis.
- In [Help], users can find information on the program, developers, or citation.

(2) Shortcuts
- [New Project] is used to create a new project.
- [Open Project] is used to open an existing project.
- [Save Project] is used to save a current project.
- [Run] is used to fit a specified model to data.
- [Run All] is used to fit all specified models to the same data at once.

- [Preference] is used for users to choose various estimation options (e.g., the
maximum number of iterations, the number of bootstrap samples, and missing data
options).

(3) View Tab


- [Data] displays the data that users uploaded into GSCA Pro.
- [Model] is used to create or revise a model.
- [Result] displays results after estimating a model.

(4) Status Bar displays the name of the current project and the type of analysis that users are
currently conducting.

(5) Tool Panel contains all tools that users can use for specifying a model or conducting an
analysis.

(6) Main Window displays users’ data, models, or analysis results.

(7) Model Bar shows a list of models that users specified. Users can add (+), delete (-),
move up (▲), or move down (▼) a model.

Timer
When an analysis runs longer than 5 seconds, a timer will appear, displaying how many
bootstrap samples have been run and how much time the analysis will take to complete.

General Information

1. How to Prepare Data for GSCA Pro


GSCA Pro is run on individual-level raw data. The raw data file can be prepared in various
formats (.txt, .csv, or .xlsx). The specific data format for GSCA Pro is as follows:

 The first row can contain the names of indicators. The indicator names should be separated
by a space, tab, comma, or semicolon, or placed in separate columns (.xlsx). Refer to the example data
files (tutorial_data.txt, tutorial_data.csv, and tutorial_data.xls).
 If the first row does not contain the names of indicators, by default, the indicators will be
named V1, V2, …, and VJ (J is the number of indicators).
 The data input begins on the second row. Data from an observation, that is, responses by an
individual on each indicator, should be separated by a space (.txt) or a comma (.csv), or placed
in separate columns (.xlsx).
 Data for each observation appear on a single row.
 Data must not include nonnumeric characters or blank cells.
 Data may include missing values. Any numeric value can be used to indicate missing values
(the default value is -9999) and should be used consistently in the data. A minimal sketch of a
file in this format is given after this list.
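Below is a minimal sketch (not an official GSCA Pro utility; the file name example_data.csv and the numeric values are purely illustrative) of how a data file in this format could be written with Python’s standard csv module. The indicator names are taken from the tutorial data.

import csv

# A toy data file in the format described above: an optional header row with
# indicator names, one observation per row, numeric values only, and -9999
# marking missing entries (the default missing-value code).
rows = [
    ["cei1", "cei2", "ma1", "ma2"],  # first row: indicator names (optional)
    [4, 5, 3, 4],                    # one observation per row
    [3, -9999, 4, 2],                # -9999 = missing value
    [5, 4, -9999, 3],
]

with open("example_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)    # comma-separated, no blank cells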

Exemplary Data and Model


Part of Bergami and Bagozzi’s (2000) organizational identification data is used for illustrative
purposes. The number of observations is 305. Figure 1 displays the model specified for the data.
This model includes 4 components (hexagons) and 21 indicators (boxes): Organizational
Prestige (OP) is associated with 8 indicators (cei1 – cei8), Organizational Identification (OI) with
6 indicators (ma1 – ma6), Affective Commitment-Joy (AC_Joy) with 4 indicators (orgcmt1, 2, 3,
and 7), and Affective Commitment-Love (AC_Love) with 3 indicators (orgcmt5, 6, and 8).

Figure 1. The specified model for the example data.

2. How to Start New Project or Open Old Project
 To start a new project, click on the [New Project] shortcut. Users name a new project,
upload a data file, set a directory for saving the project, and indicate whether the names of
indicators appear in the first row of the data file (default). Then, click on [OK].

 To open an old project, click on the [Open Project] shortcut, search the directory that
contains a project file of interest, click on the file, and click on [Open].

3. How to Check Uploaded Data


 To review uploaded data, click on the [Data] tab.
 Users can increase or decrease cell size by clicking on [Zoom in] or [Zoom Out] on the left-
hand window.
 Users can check the number of missing values per observation or variable by clicking on
[Check Missing Values] on the left-hand window.

4. How to Choose Analysis Type
 Users can choose an analysis type in [Analysis] on the Menu Bar.

5. How to Specify Model


 Users can specify a model on the Main Window. When they are viewing their data or
analysis results, they can click on the [Model] tab to specify or view a model.
 Users can specify a model with tools in the Tool Panel. A detailed description of model
specification is provided in the following chapters.

6. How to Fit Model


 Click on the [Run] shortcut to fit a specified model to the uploaded data.

7. How to View Analysis Result
 GSCA Pro automatically displays analysis results after fitting a model. Users can also click
on the [Result] tab to view the results.

Pre-Analysis – Descriptive Statistics

 To calculate descriptive statistics for the data, select “Pre-Analysis → Descriptive Statistics”.
o Select variables on the left-hand window and move them to the upper right-hand
window labelled “Variables.”
o Choose which descriptive statistics are calculated for the selected variables.
o Users can calculate descriptive statistics for the variables in different groups by
moving a grouping variable (e.g., gender) to the lower right-hand window labelled
“Split by.”
o Users can export the calculated descriptive statistics in csv format by clicking on the
[Export] icon at the bottom.

Analysis – Basic GSCA: Single Group Analysis

 To begin, select “Analysis → Basic GSCA → single group” under the [Analysis] menu.

1. Specify a structural equation model


Users can specify their structural equation model with the following steps.

Step 1: Draw components


Users are to draw components before assigning indicators to them, as follows:
 Click once on [Add Component] in the Tool Panel.
 Click the left mouse button with the cursor placed in the Main Window as many times as the
number of components. In the present example, four clicks resulted in the creation of four
components. By default, the four components were initially named new1 to new4.

Step 2: Assign indicators to components (measurement model)


After drawing components, users are to specify their measurement model as follows:
 Double-click on an individual component (a hexagon). Then, the “Assign Indicators to
Constructs” window will appear.

 In the “Assign Indicators to Constructs” window,
o Users can rename the component by typing a new name.
o Users select the appropriate indicators in the list, which appears on the left-hand dialog
window, and move them to the right-hand dialog window (“Free” means a free loading to
be estimated).
o Optionally,
 Users can choose whether the component is specified as a canonical component,
often known as a formative component, which does not involve loadings.
 If users want to align the sign of each indicator’s weight with the correlation of a
certain indicator (a sign-fixing indicator) with the component, they can indicate
which indicator is used as the sign-fixing indicator.
 Users can constrain certain loadings to be equal (Equality Constraints) or to a
constant (User-Defined Constraints); see the chapter “Analysis – Basic GSCA with Constrained Parameters” below.
o Click on OK.

 Repeat the above steps for the remaining components.


Step 3: Draw path coefficients (structural model)
Path coefficients are to be drawn as follows:
 Click once on [Add Path] in the Tool Panel.
 Drag a path from an independent component to the corresponding dependent component.
Repeat the above steps until all paths are drawn.

2. Run GSCA Pro
 Once the above steps are complete, users can run GSCA Pro for fitting the specified model to
the data. This is done by clicking on the [Run] shortcut.

3. View and Interpret Basic Results


 When the program is finished running, basic analysis results are displayed in the “Result”
tab.

 The basic analysis results of our example are below.

============================================================
Model Number : 1

Analysis Type : Basic / Single group
Execution Date : Wed Feb 24 14:13:05 2021
Number of bootstrap samples : 100

The ALS algorithm converged in 4 iterations (convergence criterion = 0.0001)

Elapsed time for original sample: 0 minute(s) 0.01 second(s)


Average elapsed time per bootstrap sample: 0 minute(s) 0.00 second(s)
Total elapsed time: 0 minute(s) 0.20 second(s)
============================================================

Model fit measures


FIT AFIT FITs FITm GFI SRMR OPE OPEs OPEm
0.535 0.532 0.168 0.606 0.985 0.048 0.466 0.845 0.394

The above table provides various model fit measures in GSCA.

 FIT indicates the total variance of all variables (indicators and components) explained by a
particular model specification. Like R squared in linear regression, the values of FIT range
from 0 to 1. The larger this value, the more variance in the variables is accounted for by the
specified model. For example, FIT = .50 indicates that 50% of the total variance of all
variables is explained by the model. There is no rule-of-thumb cutoff for FIT that indicates
an acceptable fit. (A formula sketch of FIT is given after this list.)
 AFIT (Adjusted FIT) is similar to FIT but takes model complexity into account. Like
Adjusted R-squared in linear regression, AFIT cannot be interpreted in the same way as FIT
(i.e., the proportion of the total variance explained). Instead, it can be used only for
comparing competing models. The model with the largest AFIT value may be chosen among
competing models.
 FITs indicates the total variance of all components explained by a particular structural
model specification. The values of FITs range from 0 to 1. The larger this value, the more
variance in the components is accounted for by the specified structural model.
 FITm indicates the total variance of all indicators explained by a particular measurement
model specification. The values of FITm range from 0 to 1. The larger this value, the more
variance in the indicators is accounted for by the specified measurement model.
 GFI (goodness-of-fit index) and SRMR (standardized root mean squared residual) are both
based on the difference between the sample covariances and the covariances
reproduced by the parameter estimates of GSCA. A recent study suggested the following
rule-of-thumb cutoff criteria for GFI and SRMR in GSCA (Cho, Hwang, Sarstedt, &
Ringle, 2020):

o When sample size = 100, a GFI ≥ .89 and an SRMR ≤ .09 indicate an acceptable fit.
Although both indexes can be used to assess model fit, using the SRMR with the
above cutoff value may be better than using the GFI with the suggested cutoff

value. Also, if SRMR ≤ .09, then a GFI cutoff value of ≥ .85 may still be indicative
of an acceptable fit.
o When sample size > 100, a GFI ≥ .93 or an SRMR ≤ .08 indicates an acceptable fit.
In this case, there is no preference for one index over the other, or for using a
combination of the indexes over using them separately. Each index’s suggested
cutoff value may be used independently to assess the model fit.

 OPE (out-of-sample prediction error) indicates the prediction power of a specified model for
unseen observations (Cho, Jung, & Hwang, 2019). The OPE can be used for comparing
different models in terms of prediction power; smaller values indicate better prediction.
 OPEs indicates the prediction power of a specified structural model, and OPEm indicates the
prediction power of a specified measurement model.
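For reference, here is a sketch of how FIT is defined, following the general formulation in Hwang and Takane (2014); consult that reference for the exact matrix definitions. With Z the standardized data matrix, W the weight matrix (so that the components are ZW), V defined so that ZV collects all indicators and components, and A the matrix of loadings and path coefficients,

\[
\mathrm{FIT} = 1 - \frac{\mathrm{SS}(\mathbf{ZV} - \mathbf{ZWA})}{\mathrm{SS}(\mathbf{ZV})}, \qquad 0 \le \mathrm{FIT} \le 1,
\]

that is, one minus the ratio of the unexplained sum of squares to the total sum of squares of all indicators and components.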

The component weight and loading tables display the estimates of component weights and
component loadings of indicators per component. They also show the bootstrap standard errors
(SE) and bootstrap 95% confidence intervals (95% CI) of the weight and loading estimates. The
95% confidence intervals can be used for testing the significance of an estimate (i.e., an estimate
may be considered statistically significant at the .05 level if its confidence interval does not
include 0). When a component is specified as a canonical component, its indicators’ loadings
will not be reported. Note that GSCA Pro does not provide a t-test (Estimate/SE) and its p value
because this test is a parametric test assuming the normality of a parameter estimate. Such a
parametric test is not consistent with GSCA, which typically does not require a distributional
assumption. No literature is available showing that GSCA’s estimates are normally distributed.

Path coefficients
Estimate SE 95%CI
OP→OI 0.362 0.059 0.234 0.476
OI→AC_Joy 0.614 0.035 0.559 0.686
OI→AC_Love -0.404 0.051 -0.515 -0.307
This table shows the estimates of path coefficients and their bootstrap standard errors (SE) and
95% confidence intervals (95% CI). For example, the 95% CI of the OP→OI coefficient (0.234
to 0.476) does not include 0, so this coefficient may be considered statistically significant at the
.05 level.

Component correlations
OP OI AC_Joy AC_Love
OP 1.000 0.362 0.388 -0.209
OI 0.362 1.000 0.614 -0.404
AC_Joy 0.388 0.614 1.000 -0.461
AC_Love -0.209 -0.404 -0.461 1.000
This table shows the correlations among components.

4. View and Interpret Full Results


 To view more detailed results, click on [View Full Result] on the left-hand window. Then,
more detailed analysis results are displayed.

 Below are the additional results that are not displayed in the “Result” tab.

HTMT
OP ↔ OI 0.409
OP ↔ AC_Joy 0.467
OP ↔ AC_Love 0.26
OI ↔ AC_Joy 0.753
OI ↔ AC_Love 0.527
AC_Joy ↔ AC_Love 0.641
This table shows the heterotrait-monotrait (HTMT) ratio per pair of components, which is
defined as the mean value of the item correlations across constructs relative to the (geometric)
mean of the average correlations for the items measuring the same construct. Discriminant
validity problems are present when HTMT values are high. Henseler et al. (2015) propose a
threshold value of 0.90 for structural models with constructs that are conceptually very similar.
In such a setting, an HTMT value above 0.90 would suggest that discriminant validity is not
present. But when constructs are conceptually more distinct, a lower, more conservative,
threshold value is suggested, such as 0.85 (Henseler, Ringle, & Sarstedt, 2015).
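In formula form (a sketch of the definition given by Henseler, Ringle, and Sarstedt, 2015), the HTMT ratio for a pair of components i and j is

\[
\mathrm{HTMT}_{ij} = \frac{\bar{r}_{ij}}{\sqrt{\bar{r}_{ii}\,\bar{r}_{jj}}},
\]

where \bar{r}_{ij} is the mean correlation between the indicators of i and the indicators of j, and \bar{r}_{ii} (\bar{r}_{jj}) is the mean correlation among the indicators of i (of j).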
Rönkkö and Cho (2020) show that the HTMT ratio rests on the assumption that each block of
indicators is parallel, i.e., that the indicators in a block have equal variances and equal
covariances. This parallel assumption is not made in GSCA and is rarely met in practice.
Thus, in general, we do not recommend relying on the HTMT ratios for assessing
unidimensionality or discriminant validity in GSCA.

Construct quality measures


OP OI AC_Joy AC_Love
PVE 0.641 0.581 0.589 0.583
Alpha 0.92 0.854 0.766 0.642
Rho 0.934 0.892 0.851 0.807
Dimensionality 1 1 1 1
The PVE (Proportion of Variance Explained) is the average amount of the total variance of
indicators that is explained by their corresponding component, as in principal components
analysis. If a single component explains 70% or higher of the total variance of a block of
indicators, this may be indicative of unidimensionality for the block (Jolliffe & Cadima, 2016).
The Alpha indicates Cronbach’s alpha. The Rho is Dillon-Goldstein’s rho or the composite
reliability. Note that although Cronbach’s alpha is used for assessing the reliability of sum
scores, this metric assumes equal covariances within a block of indicators, i.e., tau-equivalence
(Benitez et al., 2020), an assumption that is not made in GSCA. Dillon-Goldstein’s rho should be
calculated based on factor loadings rather than component loadings (Benitez et al., 2020). Thus,
this metric is not suitable for GSCA. Instead, the Rho can be used for factors when applying IGSCA. The
Dimensionality indicates the number of eigenvalues greater than 1 for a set of indicators per
component. If Dimensionality > 1, more than one component may be considered for a set of
indicators.
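For reference, the standard textbook formulas behind Alpha and Rho are sketched below for a block of J standardized indicators with loadings \lambda_j; these are the usual definitions, not GSCA Pro’s internal computations:

\[
\alpha = \frac{J}{J-1}\left(1 - \frac{\sum_{j=1}^{J} \sigma_j^2}{\sigma_T^2}\right),
\qquad
\rho = \frac{\bigl(\sum_{j=1}^{J} \lambda_j\bigr)^2}{\bigl(\sum_{j=1}^{J} \lambda_j\bigr)^2 + \sum_{j=1}^{J}\bigl(1 - \lambda_j^2\bigr)},
\]

where \sigma_j^2 is the variance of indicator j and \sigma_T^2 is the variance of the sum of the J indicators.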

Fornell-Larcker criterion values


OP OI AC_Joy AC_Love

OP 0.8 0.0 0.0 0.0
OI 0.362 0.762 0.0 0.0
AC_Joy 0.388 0.614 0.767 0.0
AC_Love -0.209 -0.404 -0.461 0.763
Fornell and Larcker (1981) proposed the traditional metric and suggested that each factor’s AVE
(average variance extracted) should be compared to the squared inter-factor correlation (as a
measure of shared variance) of that same factor and all other factors in the structural model. The
shared variance for all factors should not be larger than their AVEs. However, AVE and the
Fornell-Larcker criterion do not apply to GSCA because they are calculated based on factor
loadings (Benitez et al., 2020). Instead, these metrics may be used for factors when applying
IGSCA.
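As a sketch (for a factor with J standardized indicators and factor loadings \lambda_j), the AVE and the Fornell-Larcker condition can be written as

\[
\mathrm{AVE} = \frac{1}{J}\sum_{j=1}^{J} \lambda_j^2,
\qquad
\mathrm{AVE}_i > r_{ij}^2 \quad \text{for all } j \neq i,
\]

i.e., each factor’s AVE should exceed its squared correlations with the other factors. In the table above, the diagonal entries appear to correspond to the square roots of the PVE values reported earlier, and the below-diagonal entries are the component correlations.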

R squared values of indicators in measurement model


cei1 cei2 cei3 cei4 cei5 cei6 cei7 cei8 ma1 ma2 ma3 ma4 ma5 ma6 orgcmt1 orgcmt2 orgcmt3 orgcmt7 orgcmt5 orgcmt6 orgcmt8
0.609 0.68 0.593 0.646 0.642 0.711 0.603 0.642 0.619 0.575 0.405 0.678 0.657 0.552 0.559 0.624 0.672 0.5 0.633 0.504 0.61

This table shows how much variance of each indicator is explained by the indicator’s
component. When a canonical component is chosen for a set of indicators, the indicators’ R
squared values are not provided.

R squared values of components in structural model


OP OI AC_Joy AC_Love
0.0 0.131 0.377 0.163
This table shows how much variance of each component is explained by its independent
components. When a component is exogenous (e.g., OP in the present example), its R squared
value is equal to zero.

GSCA Pro also provides the variance inflation factor (VIF) for the structural model if any
component is affected by more than one component. Although no clear rule of thumb is
available, a VIF value greater than 5 (Hair, Ringle, & Sarstedt, 2011) or 10 (Myers, 1990, p. 369)
has often been taken as evidence to raise some concern. Also, if the measurement model contains
a “canonical component”, the VIF values of its corresponding indicators are calculated.

F squared values
OP OI AC_Joy AC_Love
OP 0.0 0.15 0.0 0.0
OI 0.0 0.0 0.604 0.195
AC_Joy 0.0 0.0 0.0 0.0
AC_Love 0.0 0.0 0.0 0.0
This table shows the f-squared effect size of each predictor component. As a rule of thumb,
f-squared values of 0.02, 0.15, and 0.35 may be considered small, medium, and large effect sizes,
respectively (Cohen, 1988).
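As a sketch, the f-squared effect size of a predictor is typically computed from the R-squared of the dependent component with and without that predictor included:

\[
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}.
\]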

Unstandardized component means
OP OI AC_Joy AC_Love
4.078 3.663 3.164 2.79
This table shows the averages of unstandardized components in the same scales as their original
indicators.

Unstandardized component variances


OP OI AC_Joy AC_Love
0.411 0.427 0.409 0.377
This table shows the variances of unstandardized components in the same scales as their original
indicators.

Sample correlations (lower diagonal) & Residual correlations (upper diagonal)


cei1 cei2 cei3 cei4 cei5 cei6 cei7 cei8 ma1 ma2 ma3 ma4 ma5 ma6 orgcmt1 orgcmt2 orgcmt3 orgcmt7 orgcmt5 orgcmt6 orgcmt8

cei1 0.0 -0.0 -0.317 0.035 0.234 -0.35 -0.46 -0.137 0.091 0.139 0.103 -0.282 -0.069 0.037 -0.012 -0.047 -0.017 0.072 0.143 -0.035 -0.102

cei2 0.644 0.0 0.21 -0.225 -0.366 -0.129 -0.25 -0.226 0.048 -0.025 0.052 0.045 -0.093 -0.037 0.097 -0.066 -0.147 0.114 0.03 -0.161 0.123

cei3 0.475 0.711 0.0 -0.193 -0.507 -0.147 0.147 -0.26 -0.028 -0.127 -0.053 0.049 0.073 0.095 0.12 -0.019 -0.02 -0.075 -0.017 -0.031 0.045

cei4 0.64 0.587 0.545 0.0 0.275 -0.244 -0.43 -0.16 -0.068 0.054 0.006 0.061 -0.091 0.037 -0.071 0.001 0.029 0.037 -0.134 0.08 0.051

cei5 0.713 0.537 0.423 0.742 0.0 -0.065 -0.402 -0.162 -0.038 0.052 -0.003 -0.14 -0.033 0.193 -0.14 -0.027 0.163 -0.002 0.124 0.043 -0.157

cei6 0.54 0.656 0.598 0.599 0.655 0.0 0.181 -0.224 -0.083 -0.072 0.031 0.094 -0.023 0.047 -0.039 0.076 0.0 -0.035 0.034 0.022 -0.052

cei7 0.425 0.551 0.657 0.463 0.471 0.716 0.0 0.179 -0.036 -0.004 -0.066 0.104 0.135 -0.153 0.033 0.094 -0.023 -0.096 -0.105 0.019 0.082

cei8 0.574 0.584 0.517 0.587 0.584 0.603 0.689 0.0 0.112 -0.011 -0.063 0.082 0.087 -0.23 0.007 -0.01 0.008 -0.005 -0.085 0.065 0.019

ma1 0.278 0.294 0.272 0.204 0.251 0.272 0.257 0.293 0.0 -0.052 -0.226 -0.41 -0.22 -0.023 0.159 -0.093 -0.073 0.01 -0.009 0.016 -0.007

ma2 0.273 0.238 0.202 0.223 0.256 0.243 0.242 0.219 0.576 0.0 -0.154 -0.313 -0.305 -0.153 -0.103 0.061 -0.15 0.185 0.053 -0.038 -0.014

ma3 0.239 0.239 0.196 0.18 0.205 0.247 0.18 0.167 0.394 0.405 0.0 -0.229 -0.363 -0.114 -0.034 -0.014 0.013 0.032 0.026 -0.037 0.01

ma4 0.073 0.219 0.235 0.177 0.147 0.255 0.241 0.208 0.504 0.508 0.424 0.0 0.147 -0.289 0.032 0.022 0.087 -0.135 -0.068 -0.02 0.083

ma5 0.182 0.209 0.277 0.16 0.217 0.255 0.287 0.246 0.559 0.498 0.352 0.716 0.0 -0.281 0.035 0.005 0.057 -0.093 -0.039 0.043 -0.003

ma6 0.188 0.188 0.252 0.173 0.269 0.239 0.136 0.087 0.575 0.496 0.414 0.502 0.492 0.0 -0.096 0.017 0.063 0.012 0.044 0.047 -0.086

orgcmt1 0.232 0.266 0.281 0.255 0.215 0.256 0.261 0.226 0.371 0.271 0.275 0.394 0.348 0.258 0.0 -0.237 -0.366 -0.356 -0.103 0.025 0.074

orgcmt2 0.215 0.201 0.219 0.282 0.259 0.292 0.281 0.215 0.307 0.376 0.318 0.431 0.374 0.343 0.495 0.0 -0.341 -0.389 0.015 -0.022 0.006

orgcmt3 0.182 0.128 0.174 0.246 0.279 0.219 0.191 0.174 0.362 0.342 0.369 0.502 0.44 0.404 0.474 0.528 0.0 -0.309 -0.061 0.082 -0.02

orgcmt7 0.288 0.297 0.216 0.317 0.288 0.277 0.223 0.243 0.328 0.418 0.322 0.343 0.313 0.323 0.362 0.39 0.455 0.0 0.141 -0.083 -0.055

orgcmt5 -0.039 -0.114 -0.162 -0.201 -0.075 -0.166 -0.16 -0.14 -0.214 -0.178 -0.179 -0.352 -0.328 -0.147 -0.369 -0.321 -0.393 -0.185 0.0 -0.437 -0.533

orgcmt6 -0.016 -0.088 -0.071 -0.018 -0.004 -0.061 -0.017 0.015 -0.14 -0.155 -0.157 -0.258 -0.22 -0.086 -0.151 -0.163 -0.157 -0.137 0.379 0.0 -0.529

orgcmt8 -0.203 -0.154 -0.205 -0.205 -0.25 -0.269 -0.156 -0.173 -0.255 -0.245 -0.22 -0.341 -0.357 -0.242 -0.3 -0.327 -0.381 -0.272 0.42 0.322 0.0

The lower diagonal of this table shows the correlations among all indicators, whereas the upper
diagonal shows the differences between the sample correlations and model-implied correlations.

Correlations between indicators and components


OP OI AC_Joy AC_Love
cei1 0.781 0.262 0.295 -0.119
cei2 0.825 0.302 0.283 -0.157
cei3 0.77 0.314 0.286 -0.196
cei4 0.804 0.243 0.356 -0.193
cei5 0.801 0.288 0.34 -0.152
cei6 0.843 0.33 0.338 -0.224
cei7 0.776 0.299 0.309 -0.152
cei8 0.801 0.271 0.277 -0.138
ma1 0.332 0.787 0.445 -0.27

ma2 0.296 0.758 0.457 -0.255
ma3 0.259 0.637 0.42 -0.244
ma4 0.244 0.823 0.548 -0.419
ma5 0.287 0.811 0.484 -0.4
ma6 0.241 0.743 0.436 -0.213
orgcmt1 0.311 0.425 0.748 -0.365
orgcmt2 0.307 0.473 0.79 -0.361
orgcmt3 0.249 0.533 0.82 -0.416
orgcmt7 0.335 0.446 0.707 -0.264
orgcmt5 -0.165 -0.316 -0.417 0.796
orgcmt6 -0.041 -0.229 -0.198 0.71
orgcmt8 -0.253 -0.369 -0.42 0.781
This table shows the correlations between each indicator and all components. This information
may be used for re-specifying the relationships between indicators and components
(measurement model).

5. Export Results
 Users can export and store full results in csv format by clicking on [Export Result],
checking the Full result box in the “Export Result” window, and clicking on [OK].

6. View Individual Scores


 Users can view individual scores (i.e., standardized construct scores, unstandardized
construct scores, or indicator scores with missing values imputed) by clicking on [View
Individual Scores].

Analysis – Basic GSCA with Constrained Parameters

1. How to impose equality constraints on loadings


 In the “Assign Indicators to Constructs” window, select indicators whose loadings are
constrained to be equal.
 Constrain the loadings of the selected indicators to be identical by inserting a label (e.g., an
alphabet letter or number) in the “Equality Constraints” dialog box. Then, click on “OK”.

* Note that any loadings with the same label will be constrained to be equal. In the above
example, three indicators (ma1 – ma3) are chosen and labeled “a”, indicating that the loadings
for these indicators are constrained to be equal.

2. How to constrain loadings to user-defined values


 In the “Assign Indicators to Constructs” window, select an indicator whose loading is to
be fixed to a user-defined value.
 Constrain the loading of the selected indicator to a user-defined value by inserting that value
in the “User-Defined Constraints” dialog box. Then, click on “OK”.

* Note that user-defined values should be between 0 and 1 (exclusive).

3. How to impose equality constraints on path coefficients


 Double-click on the middle point of an individual path to be constrained in the model.
 In the “Constrain Path Coefficients” window, constrain the selected path coefficient by
inserting a label (alphabet or number) in the “Equality Constraints” dialog. Then, click on
“OK”.

 Repeat the above step for other path coefficients that are to be held equal to the first path
coefficient, using the same label.
 In the model, subsequently, users can see all chosen paths labeled the same (“B”), indicating
that they are constrained to be equal.

4. How to impose a user-defined constraint on path coefficients


 Double-click on the middle point of an individual path to be constrained in the model.
 In the “Constrain Path Coefficients” window, constrain the selected path coefficient to a
user-defined value (between 0 and 1) by inserting that value in the “User-Defined
Constraints” dialog. Then, click on “OK”.

 In the model, subsequently, users can see the path fixed to the defined value.
 Note that user-defined values should be between 0 and 1 (exclusive).

Analysis – Basic GSCA: Multigroup Analysis

* Note: To conduct a multiple-group analysis, users must include a categorical grouping
variable in the data that indicates the group membership of each case. Group memberships must
be denoted by integers (e.g., sex: 1 = male & 2 = female), not by non-numeric characters.

1. How to conduct a multi-group analysis without cross-group equality constraints
 To begin, select “Analysis → Basic GSCA → multiple groups” under the [Analysis] menu.
 Select a grouping variable in the list of indicators in a dialog box. Then, click on “OK”. In
this example, “gender” was chosen.

 Users then specify their measurement and structural models in the same way as described for
a basic, single group analysis.
 Once the above steps are complete, users can run GSCA Pro for fitting the specified model to
multiple groups simultaneously. This is done by clicking on the [Run] shortcut.
 As shown below, all multi-group analysis results are displayed in the “Results” window. In
this example, the same model was applied to two groups (males and females) at the same
time. Thus, all parameter estimates are provided for each of the two groups labeled Group 1
and Group 2.

2. How to conduct a multi-group analysis with cross-group equality
constraints
 To impose cross-group equality constraints on loadings, select indicators whose loadings are
constrained to be equal across groups in the “Assign Indicators to Constructs” window.
Then, constrain the loadings of the selected indicators to be identical across groups by
inserting a label (alphabet or number) in the “Equality Constraints” dialog box. Then, click
on “OK”.

 To impose cross-group equality constraints on path coefficients, double-click on the middle point
of a path in the model. Then, constrain the selected path coefficient to be identical across
groups by inserting a label in the “Equality Constraints” dialog box. Then, click on “OK”.

 Note that any loadings and path coefficients with the same label will be constrained to be
equal across groups.

Analysis – GSCA with 2nd-order Components

Users can specify and examine a model that involves second-order components (Hwang &
Takane, 2014, Chapter 3).

 To begin, select “Analysis → GSCA with 2nd-order Components → single group or multiple
groups” under the [Analysis] menu.

 Users specify (first-order) components as described earlier.

 Subsequently, users specify second-order components as follows:


o Click once on [Add 2nd-Order Component] in the Tool Panel. Click the left mouse
button with the cursor placed in the Main Window as many times as the number of
second-order components, which appear as green hexagons. In the example below, a
second-order component, labeled high1, is assumed to be linked to AC_Joy and
AC_Love.
o After drawing all second-order components, double-click on an individual second-
order component.

o In the “Assign Indicators to Constructs” window,
 Users can rename the second-order component by typing a new name.
 Users select the (first-order) components in the list and move them to the right-
hand dialog window (“Free” means a free loading to be estimated).
 Then, arrows connecting second-order components to their first-order components
will appear in the model.
 Optionally,
 Users can specify the second-order component as a canonical component, which
does not involve loadings. In this case, the second-order component is
connected to its first-order component by a straight line.
 If users want to align the sign of each first-order component’s weight with the
correlation of a certain first-order component (a sign-fixing indicator) with the
second-order component, they can indicate which first-order component is used
as the sign-fixing indicator.
 Users can impose constraints on loadings for 2nd-order components. This can
be done in the “Constrain Path Coefficients” window. Users can access this
window by double-clicking on the middle point of an individual arrow from a
2nd order component to (first-order) components.
 Click on OK.
o Repeat the above steps for the remaining second-order components.

 Users draw path coefficients to complete their structural model.

Analysis – Nonlinear GSCA

Users can apply nonlinear GSCA when indicators are not continuous (i.e., nominal or ordinal)
(Hwang & Takane, 2014, Chapter 5).

 To begin, select “Analysis → Nonlinear GSCA → single group or multiple groups” under
the [Analysis] menu.

 Users then specify their measurement and structural models as in a basic analysis, as
described earlier.
 Double-click on an individual component (a hexagon). Then, the “Assign Indicators to
Constructs” window will appear.
 In the “Assign Indicators to Constructs” window, users select an indicator and then choose
the indicator’s type in a right-hand dialog box called “Indicator Type” and click on “OK” in
the box.

Analysis – GSCA with Component Interactions

Users can specify and examine interaction terms of components (Hwang et al., 2021; Hwang,
Ho, & Lee, 2010; Hwang & Takane, 2014, Chapter 6).

 To begin, select “Analysis → Component Interaction” under the [Analysis] menu.

 Users then specify their measurement and structural models without component interaction
terms as in a basic analysis, as described earlier.
 To add component interaction terms, click once on [Add Component Interaction] in the
Tool Panel.
 Click the left mouse button with the cursor placed in the Main Window as many times as the
number of component interaction terms. In the present example, one interaction term between OP and
OI is specified.
 Double-click on a component interaction term. Then, in the “Assign Components to
Interaction Terms” window, move each component of the interaction term and click on
“OK.”

 Repeat the above step for remaining interaction terms.


 Add the paths of the interaction terms.

 Optionally, users can conduct regularized estimation of path coefficients to avoid potential
multicollinearity in the structural model. Such multicollinearity may occur because
component interaction terms tend to be highly correlated with their components (the penalty
forms are sketched after this list).
o To apply regularization, click once on [Regularization] in the Tool Panel.
o In the “Regularization” window, users can choose either ridge or lasso
regularization. Also, they can choose the range of candidate penalty parameters and
the number of data splits for cross validation (i.e., K).
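As a rough sketch (the exact penalized criterion is given in Hwang, 2009, and Hwang & Takane, 2014), ridge and lasso regularization add a penalty on the path coefficients b_k to GSCA’s least-squares criterion \phi, with the penalty weight \lambda selected by K-fold cross validation over the user-specified range of candidate values:

\[
\phi_{\text{ridge}} = \phi + \lambda \sum_{k} b_k^{2},
\qquad
\phi_{\text{lasso}} = \phi + \lambda \sum_{k} \lvert b_k \rvert.
\]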

Analysis – Multilevel GSCA

Note: To apply multilevel GSCA, users must include a categorical grouping variable in the data
that indicates second-level units. GSCA Pro currently provides a two-level analysis only
(Hwang & Takane, 2014, Chapter 7; Hwang, Takane, & Malhotra, 2007).

 To begin, select “Analysis  Multilevel GSCA” under the [Analysis] menu.


 Select a second-level variable in the list of indicators in the data file in a dialog box. Then,
click on “OK”.

 Then, specify both measurement and structural models as described earlier.

Analysis – Regularized GSCA

Users can obtain regularized parameter estimates for addressing multicollinearity or selecting
variables (Hwang, 2009; Hwang & Takane, 2014, Chapters 8, 9).

 To begin, select “Analysis → Regularized GSCA → single group or multiple groups”
under the [Analysis] menu.

 Specify both measurement and structural models.


 Then, click once on [Regularization] in the Tool Panel.
 In the “Regularization” window, users can choose either ridge or lasso regularization. Also,
they can choose the range of candidate parameters of each penalty term for each parameter set
(weight, loading, and path coefficient) and the number of data splits for cross validation (i.e.,
K).

Analysis – IGSCA

Users can apply Integrated GSCA (IGSCA) for estimating a model that contains both common
factors and components (Hwang et al., 2020).

 To begin, select “Analysis → IGSCA → single group or multiple groups” under the
[Analysis] menu.

 Users can add components as described earlier.


 To add factors, click once on [Add Factor] in the Tool Panel. Click the left mouse button
with the cursor placed in the Main Window as many times as the number of factors. Each
factor is displayed as a circle.
 Then, assign indicators to each factor.

 Specify the structural model.

Post-Analysis – Model Comparison

Users can compare competing models (e.g., constrained and unconstrained models) after fitting
the models to the same data (Cho et al., 2019; Hwang & Takane, 2014, Chapter 3).

 Specify a model or open an existing


model from a project.
 Users can specify as many competing
models as they want by clicking on the
[+] box in the Model Bar.
 In the “Add Model” window, click on
“New Model” and then a new blank page
appears in the Main Window for
specifying another model (refer to Figure
(1)).
 In the “Add Model” window, users can
also copy and paste an existing model by
clicking on the “copy” dialog box. Then,
the model appears on a new page in the
Main Window and users can modify it
(refer to Figure (2)).

 Likewise, users can delete any model by clicking on the [-] box in the Model Bar.
o In the present example, we specified the following competing models.

 Users can fit specified models individually by clicking on the [Run] shortcut for each model
or fit all models simultaneously by clicking on the [Run All] shortcut.
o IMPORTANT: Before fitting models, users should indicate which models they plan to
compare. To do this, select the [Preference] shortcut and select “Yes” in the bottom
option of Model Comparison per model and click on “Apply”. If users click on “Apply
all”, all the preference options set up for one model will also be applied to the other
models.

 After fitting all models, select “Post-Analysis → Model Comparison” under the [Post-Analysis]
menu.
 In the left-hand “Model Name” dialog box of the [Model Comparison] window, move all
the models that users want to compare to the right-hand “Models to be compared” dialog
box. Note that the middle “Model comparability” dialog box shows which models are
directly comparable. Only the models with the same label can be compared in a pair-wise
manner.
 Users can choose which model fit index(es) they will use for comparing models in the bottom
dialog box “Fit measures.”

 In the “Result” window, each pair of the selected competing models is compared based on
each fit index. For example, if the FIT difference between two models is statistically
significant (i.e., its 95% confidence interval does not contain a zero), the model with the
larger FIT value may be preferred in terms of explanatory power for the sample at hand. If the
OPE difference between two models is statistically significant, the model with the smaller
OPE may be preferred in terms of prediction power for unseen samples.

Post-Analysis – Mediation Analysis

After fitting a model, users can calculate an indirect effect of a variable (component or indicator)
and examine its statistical significance (Hwang & Takane, 2014, Chapter 3).

 After fitting a model, select “Post-Analysis → Mediation Analysis” under the [Post-Analysis] menu.
 In the “Mediation Analysis” window, indicate how
many paths are involved in an indirect effect of
interest in the “Number of Paths” box and click on
“Confirm.”
o In the present example, if users want to test the
indirect effect of OP on AC_Joy through OI,
there are two paths involved (i.e., OP → OI
and OI → AC_Joy). Thus, they can put 2 in
the Number of Paths box.
 Then, assign a variable (component or indicator) to each of the
small boxes and click on “Run” at the bottom.
o In the present example, if users want to test the indirect effect of OP on AC_Joy
through OI, they assign OP to the first box, OI to the second, and AC_Joy to the third
box.
 Users can test a maximum of three indirect effects per page. If they want to test more
indirect effects, they can add an extra page by clicking on “Add Page” at the bottom.

 In the “Result” window, users can view each indirect effect’s estimate and its standard error
and 95% confidence interval.
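For orientation, the point estimate of a two-path indirect effect corresponds to the product of the two path coefficients along the pathway. Using the single-group estimates reported earlier (a sketch of the arithmetic only; the bootstrap SE and 95% CI reported by GSCA Pro should be used to judge significance),

\[
\hat{\beta}_{\mathrm{OP} \to \mathrm{OI}} \times \hat{\beta}_{\mathrm{OI} \to \mathrm{AC\_Joy}} = 0.362 \times 0.614 \approx 0.222.
\]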

Post-Analysis – Conditional Process Analysis

Conditional process analysis refers to an analytic approach that encompasses mediation,
moderation, moderated mediation, and mediated moderation analyses (Hayes, 2013; Hayes &
Preacher, 2013). GSCA Pro enables users to conduct a conditional process analysis that involves
components or indicators. Before conducting this analysis, users first specify and fit a model
with component interaction terms. Hwang et al. (2021) provides an example of a conditional
process analysis in GSCA.
Below, we explain how to compute and test the indirect effect of a variable (component
or indicator) at user-defined, specific values of moderators. For illustration, we consider the
following model.

 After fitting a model, select “Post-Analysis → Conditional Process Analysis” under the
[Post-Analysis] menu.
 In the “Conditional Process Analysis” window, indicate how many paths are involved in a
(conditional) direct or indirect effect of interest in the “Number of Paths” box and click on
“Confirm.”
o In the present example, if users want to test the indirect effect of OP on AC_Joy
mediated through OI at a certain value of OP, there are two paths involved (i.e., OP
→ OI and OI → AC_Joy). Thus, they can put 2 in the Number of Paths box.
 Then, assign a variable (component or indicator) to each of the small boxes. Then, GSCA Pro
automatically searches for moderators that are involved in the mediating pathway of interest and
asks users to add the values of the moderators. Then, click on “Run” at the bottom.
o In the example, if users want to test the indirect effect of OP on AC_Joy through OI
at OP = -1, they assign OP to the first box, OI to the second, and AC_Joy to the third
box. Then, they add -1 to the box appearing below the second path.

 Users can test three indirect effects per page. If they want to test more effects, they can add
an extra page by clicking on “Add Page” at the bottom.
 In the “Result” window, users can view each effect’s estimate and its standard error and
95% confidence interval.

Preference – Estimation Options

 Users can set up their own estimation options. This can be done by clicking on the
[Preference] shortcut.

 As GSCA utilizes an iterative algorithm for parameter estimation, users need to decide on the
maximum number of iterations, a tolerance level (of the optimization function difference
between two consecutive iterations), and initial values for weights. By default, the maximum
number of iterations = 100, tolerance level = .0001, and equal initial values are used for
weights. Users can change the maximum number of iterations and tolerance level and assign
random initial values to weights. (A schematic of this stopping rule is sketched after this list.)
 As GSCA uses the bootstrap method (Efron, 1982) to obtain the standard errors and 95%
confidence intervals of parameter estimates, users need to prescribe the number of bootstrap
samples. The default number of bootstrap samples is 100.
 GSCA Pro currently provides three options for handling missing values: (1) listwise deletion,
(2) mean substitution, and (3) least-squares imputation (Hwang & Takane, 2014, Chapter 3).
If the uploaded data contain missing observations, users choose one of the options and
specify which numeric value indicates missing observations. The default value indicating
missing observations is -9999. Users can change it to a user-defined value in the box of
[Missing Observation Value].
 If users consider comparing a group of models, they should select “Yes” in the [Model
Comparison] option for each model in the comparison group. Then, GSCA Pro will save all
necessary information on each fitted model for a post-analysis of model comparison. This
option may cause GSCA Pro to use a large amount of RAM, which tends to decrease computational
speed, particularly when the sample size and/or the number of models to be compared is large.

 Users can set up their preference options for each model separately by clicking on the
[Apply] button per model number. Also, they can apply the same preference options to all
models at once by clicking on the [Apply all] button.
 The Autosave option (default = Yes) automatically saves the current changes or progress in
the program.
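The sketch below (plain Python, not GSCA Pro’s internal code) illustrates the stopping rule implied by these options: the update step is repeated until the change in the optimization criterion between two consecutive iterations falls below the tolerance level, or until the maximum number of iterations is reached. The function name and the update_step callback are hypothetical.

# Schematic of the convergence rule described above (not GSCA Pro's code):
# stop when the criterion changes by less than `tol` between two consecutive
# iterations, or after `max_iter` iterations (defaults mirror the manual).
def iterate_until_converged(update_step, params, max_iter=100, tol=1e-4):
    """update_step(params) -> (new_params, criterion_value); hypothetical callback."""
    previous = float("inf")
    for iteration in range(1, max_iter + 1):
        params, criterion = update_step(params)
        if abs(previous - criterion) < tol:   # tolerance level reached
            return params, iteration          # converged
        previous = criterion
    return params, max_iter                   # stopped at the iteration limit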

References
Benitez, J., Henseler, J., Castillo, A., & Schuberth, F. (2020). How to perform and report an
impactful analysis using partial least squares: Guidelines for confirmatory and explanatory
IS research. Information & Management, 57 (2), 103168.
https://doi.org/10.1016/j.im.2019.05.003.
Bergami, M., & Bagozzi, R. P. (2000). Self-categorization, affective commitment and group self-
esteem as distinct aspects of social identity in the organization. British Journal of Social
Psychology, 39, 555–577. https://doi.org/10.1348/014466600164633
Cho, G., Hwang, H., Sarstedt, M., & Ringle, C. M. (2020). Cutoff criteria for overall model fit
indexes in generalized structured component analysis. Journal of Marketing Analytics, 8,
189–202. https://doi.org/10.1057/s41270-020-00089-1
Cho, G., Jung, K., & Hwang, H. (2019). Out-of-bag prediction error: A cross validation index for
generalized structured component analysis. Multivariate Behavioral Research, 54(4), 505–
513. https://doi.org/10.1080/00273171.2018.1540340
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillside, NJ:
Lawrence Erlbaum Associates.
Efron, B. (1982). The jackknife, the bootstrap and other resampling plans. Philadelphia, PA:
SIAM. https://doi.org/10.1137/1.9781611970319
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable
variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
https://doi.org/10.2307/3151312
Hair, J., Ringle, C., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of
Marketing Theory and Practice, 19, 139-151. https://doi.org/10.2753/MTP1069-
6679190202
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the
results of PLS-SEM. European Business Review, 31(1), 2–24. https://doi.org/10.1108/EBR-
11-2018-0203
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A
regression-based approach. New York, NY: Guilford Press.
Hayes, A. F., & Preacher, K. J. (2013). Conditional process modeling: Using structural equation
modeling to examine contingent causal processes. In Quantitative Methods in Education
and the Behavioral Sciences: Issues, Research, and Teaching. Structural equation
modeling: A second course, 2nd ed. (pp. 219–266). Charlotte, NC, US: IAP Information
Age Publishing.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant
validity in variance-based structural equation modeling. Journal of the Academy of
Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8
Hwang, H. (2009). Regularized generalized structured component analysis. Psychometrika,
74(3), 517–530. https://doi.org/10.1007/S11336-009-9119-Y
Hwang, H., Cho, G., Jin, M. J., Ryoo, J. H., Choi, Y., & Lee, S. H. (2021). A knowledge-based
multivariate statistical method for examining gene-brain-behavioral/cognitive relationships:
Imaging genetics generalized structured component analysis. PloS One, 16(3), e0247592.
https://doi.org/10.1371/journal.pone.0247592
Hwang, H., Cho, G., Jung, K., Falk, C., Flake, J., & Jin, M. (2020). An approach to structural
equation modeling with both factors and components: Integrated generalized structured
component analysis. Psychological Methods. Advance online publication.
https://doi.org/10.1037/met0000336
Hwang, H., Ho, M.-H. R., & Lee, J. (2010). Generalized Structured Component Analysis with
Latent Interactions. Psychometrika, 75(2), 228–242. https://doi.org/10.1007/s11336-010-
9157-5
Hwang, H., & Takane, Y. (2014). Generalized structured component analysis: A component-
based approach to structural equation modeling. New York, NY: Chapman and Hall/CRC
Press.
Hwang, H., Takane, Y., & Malhotra, N. (2007). Multilevel generalized structural component
analysis. Behaviormetrika, 34(2), 95–109. https://doi.org/10.2333/bhmk.34.95
Jolliffe, I. T., & Cadima, J. (2016). Principal component analysis: A review and recent
developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical
and Engineering Sciences. https://doi.org/10.1098/rsta.2015.0202
Myers, R. H. (1990). Classical and modern regression with applications. Boston, MA: PWS-Kent
Publishing.
Rönkkö, M., & Cho, E. (2020). An updated guideline for assessing discriminant validity.
Organizational Research Methods. Published online.
https://doi.org/10.1177/1094428120968614

