
Presenter: Prof. Richard Chinomona


Structural Equation Modeling in Management Sciences Research
PRESENTER:
Prof. RICHARD CHINOMONA
VAAL UNIVERSITY OF TECHNOLOGY
SOUTH AFRICA
POSTGRADUATE & STAFF
DEVELOPMENT PROGRAMME
FACILITATION
Outline of the Presentation
Preamble
Structure of a Good Research
Research Methodology & Design
Statistical Research Approaches (Structural
Equation Modeling, SEM)
What is SEM?
Why use SEM?
Common SEM Software used in Business Research
Advantages & Disadvantages of the SEM software

Preamble
A good research paper usually has a good:
Introduction
Literature Review
Conceptual Model & Hypothesis
Development
Research Design
Data Analysis & Results
Discussion of Results & Implications
Limitations & Future Research

RESEARCH
METHODOLOGY


DATA ANALYSIS STATISTICAL
APPROACHES

STRUCTURAL EQUATION
MODELING (SEM)
SEM: Some Origins
Psychology (factor analysis):
Spearman (1904), Thurstone (1935, 1947)
Human genetics (regression analysis):
Galton (1889)
Biology (path modeling):
S. Wright (1934)
Economics (simultaneous equation modeling):
Haavelmo (1943), Koopmans (1953), Wold (1954)
Statistics (maximum likelihood estimation):
R.A. Fisher (1921), Lawley (1940)
Synthesis into modern SEM and factor analysis:
Joreskog (1970), Lawley & Maxwell (1971), Goldberger
& Duncan (1973)

Structural Equation Modeling
(SEM) Approach
What is SEM
It is a statistical technique that combines
elements of traditional multivariate models,
such as regression analysis, factor analysis
and simultaneous equation modeling.
It is a general approach to analysing
multivariate data for theory testing (e.g.,
Bagozzi, 1980).
SEM in a nutshell
Combination of factor analysis and
regression
Continuous and discrete predictors and
outcomes
Relationships among measured or latent
variables
Direct link between Path Diagrams and
equations and fit statistics
Models contain both measurement and
path models
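The slide above says SEM combines factor analysis (latent variables with observed indicators) and regression (paths among variables). A minimal numpy sketch of that idea, not from the slides; the loadings and path coefficient are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Structural part: latent outcome eta depends on latent predictor xi.
xi = rng.standard_normal(n)              # exogenous latent variable
beta = 0.6                               # structural path coefficient
eta = beta * xi + np.sqrt(1 - beta**2) * rng.standard_normal(n)

# Measurement part: three observed indicators load on the latent xi.
lam = np.array([0.8, 0.7, 0.6])          # standardized factor loadings
x = lam[:, None] * xi + np.sqrt(1 - lam[:, None]**2) * rng.standard_normal((3, n))

# With standardized variables, the model-implied correlation between two
# indicators of the same factor is the product of their loadings,
# and the correlation between the latents recovers the structural path.
r_x1x2 = np.corrcoef(x[0], x[1])[0, 1]   # close to 0.8 * 0.7 = 0.56
r_xi_eta = np.corrcoef(xi, eta)[0, 1]    # close to beta = 0.6
print(round(r_x1x2, 2), round(r_xi_eta, 2))
```

This is exactly the "measurement model plus structural model" split described later in the deck: the loadings belong to the measurement part, the beta path to the structural part.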
Why Use SEM ?
SEM lends itself well to the analysis of data for
inferential purposes.

Whereas traditional multivariate procedures are
incapable of either assessing or correcting for
measurement error, SEM provides explicit estimates
of these error parameters.

SEM procedures can incorporate both unobserved
(i.e. latent) and observed variables.
Advantages of Using SEM
Structural equation models go beyond
ordinary regression models to incorporate
multiple independent and dependent variables
as well as hypothetical latent constructs that
clusters of observed variables might represent.
They also provide a way to test the specified
set of relationships among observed and latent
variables as a whole, and allow theory testing
even when experiments are not possible.
As a result, these methods have become
ubiquitous in all the social and behavioral
sciences (e.g., MacCallum & Austin, 2000).
Common SEM Software used in Business
Survey Research

LISREL
AMOS
SMART PLS
MPLUS
SEM APPROACHES
COVARIANCE BASED APPROACH
Lisrel
Amos
Mplus
COMPONENT BASED APPROACH
Smart PLS
SEM - COVARIANCE-BASED APPROACH
A two-stage approach
Measurement model:
The part of the model that relates indicators to latent factors
The measurement model is the factor analytic part of SEM
Latent variables and their observed measures
Confirmatory Factor Analysis (CFA)
Structural model:
This is the part of the model that relates variables or factors to
one another (prediction)
Model with links among the latent variables.
Path Modeling

Model Fit
Checking the model fit for CFA & Path Model
Chi-square value,
Comparative fit index (CFI),
Goodness of fit index (GFI),
Incremental fit index (IFI),
Normed fit index (NFI),
Tucker Lewis index (TLI) &
Root mean square error of
approximation (RMSEA).
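Of the indices listed above, RMSEA has a simple closed form based on the model chi-square, its degrees of freedom, and the sample size. A small Python sketch; the chi-square, df, and N values below are hypothetical:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation from the model chi-square,
    its degrees of freedom, and the sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model: chi-square = 85 on 40 df, estimated on N = 300 cases.
print(round(rmsea(85, 40, 300), 3))  # 0.061 -- below the usual 0.08 cut-off
```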

SEM COMPONENT-BASED APPROACH

PLS-SEM (an advancement of the PLS
approach to multiple regression analysis)


Uses the Smart PLS software


SEM questions
Does the model produce an estimated
population covariance matrix that fits the
sample data?
SEM calculates many indices of fit: close fit,
absolute fit, etc.
What is the reliability of the indicators?
What are the parameter estimates from the
model?
Are there any indirect or mediating effects in
the model?
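The last question above concerns indirect (mediated) effects. In a simple X -> M -> Y model, the indirect effect is the product of the two path coefficients. A numpy illustration on simulated data; the path values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Simulated mediation model: X -> M -> Y, plus a small direct X -> Y path.
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)

# Path a (X -> M): slope from regressing M on X.
a = np.polyfit(x, m, 1)[0]

# Path b (M -> Y controlling for X): slope from the multiple regression.
design = np.column_stack([m, x, np.ones(n)])
b = np.linalg.lstsq(design, y, rcond=None)[0][0]

indirect = a * b        # indirect (mediated) effect of X on Y
print(round(indirect, 2))
```

SEM software estimates both paths simultaneously and can bootstrap a confidence interval for the product, but the quantity being estimated is the same a*b shown here.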

SEM limitations
SEM is a confirmatory approach
You need to have established theory about the
relationships
Cannot be used to explore possible relationships
when you have more than a handful of variables
Exploratory methods (e.g. model modification) can
be used on top of the original theory
Biggest limitation is sample size
It needs to be large to get stable estimates of the
covariances/correlations
200 subjects for a small to medium-sized model
A minimum of 10 subjects per estimated parameter
Also affected by effect size and required power
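The two rules of thumb above (at least 200 cases for a small to medium-sized model, and at least 10 cases per estimated parameter) can be combined into a trivial helper; the parameter counts below are hypothetical:

```python
def min_sample_size(n_params: int, per_param: int = 10, floor: int = 200) -> int:
    """Rule-of-thumb minimum N: at least `per_param` cases per estimated
    parameter, and never below the floor suggested for small/medium models."""
    return max(n_params * per_param, floor)

# Hypothetical model: 15 loadings + 8 structural paths + 23 error variances.
print(min_sample_size(15 + 8 + 23))  # 460
# A very small model still needs the 200-case floor.
print(min_sample_size(12))           # 200
```

As the slide notes, the real requirement also depends on effect size and the power you need, so treat this as a lower bound, not a target.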
SEM limitations
Missing data
Can be dealt with in the typical ways (e.g.
regression, EM algorithm, etc.) through SPSS and
data screening
Most SEM programs will estimate missing data
and run the model simultaneously
Multivariate Normality and no outliers
Screen for univariate and multivariate outliers
SEM programs have tests for multi-normality
SEM programs have corrected estimators when
there's a violation
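As a simple illustration of the missing-data screening mentioned above, the sketch below contrasts listwise deletion with mean imputation using numpy. The data are made up, and real SEM programs typically use more principled methods such as full-information maximum likelihood:

```python
import numpy as np

# Tiny data set with two variables and two incomplete cases.
data = np.array([[1.0, 2.0],
                 [np.nan, 3.0],
                 [4.0, np.nan],
                 [5.0, 6.0]])

# Listwise deletion: keep only complete cases.
complete = data[~np.isnan(data).any(axis=1)]

# Mean imputation: replace each missing value with its column mean.
col_means = np.nanmean(data, axis=0)
imputed = np.where(np.isnan(data), col_means, data)

print(complete.shape[0])        # only 2 of 4 cases survive deletion
print(np.round(imputed, 2))     # all 4 cases retained after imputation
```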
Steps in Performing SEM Analysis
Model Specification:
Set up a theory-driven model, in other words, a model
containing latent or unobserved variables
Estimation of Parameters:
Estimate the parameters and determine whether the
initial model fits the data
Assess the Model & Model Fit:
Determine the source of any misfit through residual
analysis and modification indices
Model Modification:
Modify the model accordingly and re-estimate its
parameters
Interpretation & Communication:
Accept the fit of the new model and interpret the results
Current Trends in SEM
Methodology Research
Statistical models and methodologies for
missing data
Combinations of latent trait and latent class
approaches
Bayesian models to deal with small sample
sizes (Smart PLS)
Non-linear measurement and structural
models
Extensions for non-random sampling, such as
multi-level models

STRUCTURAL EQUATION
MODELING WITH
ANALYSIS OF MOMENT
STRUCTURES (AMOS 21)
WHAT IS AMOS?
AMOS (Analysis of Moment Structures) is an add-on
module for SPSS.
It is designed primarily for structural equation
modeling, path analysis, and covariance structure
modeling, though it may also be used to perform
linear regression analysis, ANOVA, and ANCOVA.
It features an intuitive graphical interface that allows
the analyst to specify models by drawing them.
It also has a built-in bootstrapping routine and
superior handling of missing data.
It reads data from a number of sources, including
MS Excel spreadsheets and SPSS data files.
SEM WITH AMOS
Amos enables you to specify, estimate, assess and present
models to show hypothesized relationships among variables.
The software lets you build models more accurately than with
standard multivariate statistics techniques.
Users can choose either the graphical user interface
or the non-graphical, programmatic interface.
SPSS Amos allows you to build attitudinal and behavioral
models that reflect complex relationships. The software:
Provides structural equation modeling (SEM) that is easy
to use and lets you easily compare, confirm and refine models.
Uses Bayesian analysis to improve estimates of model
parameters.
Offers various data imputation methods to create
different data sets.

RESEARCH VARIABLES
LATENT VARIABLES (UNOBSERVED VARIABLES)
These are variables that are not directly observed but are rather
inferred from other variables that are observed (directly
measured).
They are hypothesized to exist in order to explain other
variables, such as specific behaviors, that can be observed.
MANIFEST VARIABLES (OBSERVED VARIABLES)
An observed variable assumed to indicate the presence of a
latent variable.
Also known as an indicator variable.
E.G. We cannot observe intelligence directly, for it is a latent variable.
We can look at indicators such as vocabulary size, success in one's
occupation, IQ test score, ability to play complicated games (e.g., bridge)
well, writing ability, and so on.
LATENT & MANIFEST VARIABLES IN A CONFIRMATORY FACTOR ANALYSIS MODEL
AMOS USER GRAPHIC INTERFACE
Launching Amos
From among the many modules that
come with Amos, you want to start by
opening "Amos Graphics".
The Amos Graphics Graphical User Interface (GUI)
There are often several different
ways one can execute options in
Amos. In this tutorial, I will tend to
use icons rather than dropdown
menus. Also, numerous options
can be executed from a right click
of the mouse.
Step 1: Open your data file.
This icon, which looks like an
Excel spreadsheet, is used to
open data files and link them
to your model.
Step 1: Open your data file. (continued)
Once you click on the data file icon, you encounter the data file manager.
Now, click on File Name.
Step 1: Open your data file. (continued)
Step 1: Open your data file. (continued)
You will need to choose the sheet in the Excel file you want to use before
clicking OK.
Step 1: Open your data file. (continued)
You can view the data in the datasheet either before or after selecting
this as the dataset you want to use. Once you click OK, you have opened
the dataset and can access the variables.
Step 2: Dragging observed variables to the palette.
This icon gives you the list of
variables in the dataset.
Step 2: Dragging observed variables to the palette.
(continued)
By selecting and holding down
the mouse key, you can drag
variables to the palette.
Step 2: Dragging observed variables to the palette.
(continued)
Observed variables are shown
as rectangles.
Step 3: Once variables are on the palette, you can
move them and resize/reshape them (for aesthetics).
Tool to use for moving things.
Tool to use for reshaping things.
Step 3: Once variables are on the palette, you can
move them and resize/reshape them. (continued)
Step 4: Drawing arrows.
Tool to use for drawing directional
arrows between variables.
If you want to erase anything
(e.g., arrow or variable), use
this tool.
Step 5: Adding error variables.
Response (endogenous) variables
require error terms. These error
terms are represented as error
variables. We can add them by
selecting this tool and clicking on
the response variables.
Try clicking the error term tool over
an endogenous variable repeatedly
to see what that does.
Step 6: Naming error variables.
All variables must have names;
this applies to the error variables
too. To name an error variable,
highlight it, right-click, and
select Object Properties.
Step 6: Naming error variables. (continued)
Simply typing a name into
the form names the variable.
We can leave the window open
and click on any object to modify
its properties. To accept changes,
simply close the window by clicking
the square with the red X.
Note that error variables have
unstandardized regression weights
of 1.0, which are shown.
Step 7: Adding a title to our model.
Select the Title tool and then
click at the place on the palette
where you want the title to go.
The title tool gives you a chance
to both enter the caption text and
to set its properties.
Step 8: Now we had better save our model.
The icon that looks like a floppy
disc executes the save command.
Step 9: Setting the run parameters.
To set the run parameters, we
click this icon.
Step 9: Setting the run parameters. (continued)
There are a great many options
encountered here. Right now we only
need to be concerned with two of the
tabs. On the estimation tab, the only
thing we might be interested in is the
possibility of estimating the means and
intercepts. However, in this example, our
data only consist of correlations and
standard deviations, so we have no
information regarding the means. Thus,
we must leave that option unchecked.
Step 9: Setting the run parameters. (continued)
On the Output tab, there are many
options of interest. Right now we will
only select Standardized estimates and
Squared multiple correlations.

Clicking on the red X closes the window,
leaving us ready to run the model.
Step 10: Running the model (estimating parameters).
The abacus icon initiates the
calculation process.
Step 11: Getting to the results.
It is helpful to resize this
window so you can get a
peek at the chi-square and
df of the model after it runs.
To access the full results, this is
the icon you will need to click.
Step 11:
Looking at
results.
Amos uses a directory tree to organize
model output. You should look through
all the output to get familiar with where
information is located.
Step 11:
Looking at
results.
(continued)
Once we determine that our
model fit is acceptable, our
focus is placed on the
estimates, their standard
errors, the critical ratios
(which are like t-tests), and
the associated p-values.

Since we requested
standardized values, they
are presented in the output.
Measurement Instrument Reliability & Validity
Reliability (Consistency)
Reliability refers to the quality of a measurement procedure that provides repeatability and accuracy.
Reliability estimates are used to evaluate (1) the stability of measures administered at different times to the
same individuals or using the same standard (test-retest reliability) or (2) the equivalence of sets of items from
the same test (internal consistency) or of different observers scoring a behavior or event using the same
instrument.
Reliability coefficients range from 0.00 to 1.00, with higher coefficients indicating higher levels of reliability.
A good measure of some entity is expected to produce consistent scores. A procedure's reliability is estimated
using a coefficient (i.e., a numerical summary). The major types of coefficients include:
Cronbach's Alpha value
Composite reliability value
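Both coefficients named above have standard formulas: Cronbach's alpha is computed from the item variances and the variance of the total score, and composite reliability from the standardized loadings. A numpy sketch with simulated items; the loading of 0.8 is an arbitrary illustrative value:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from a cases-by-items matrix of scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings) -> float:
    """Composite reliability (CR) from standardized factor loadings:
    CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

# Simulate 4 items that each load 0.8 on one common factor.
rng = np.random.default_rng(1)
f = rng.standard_normal(5000)
items = np.column_stack(
    [0.8 * f + 0.6 * rng.standard_normal(5000) for _ in range(4)])

print(round(cronbach_alpha(items), 2))
print(round(composite_reliability([0.8, 0.8, 0.8, 0.8]), 2))  # 0.88
```

With equal loadings the two coefficients coincide; with unequal loadings composite reliability is the more faithful summary, which is why SEM reports usually give both.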
Validity (Meaningfulness)
A valid measurement tool or procedure does a good job of measuring the concept that it purports to measure.
Validity means that correct procedures have been applied to find answers to a question.
Validity is often defined as the extent to which an instrument measures what it purports to measure.
Validity requires that an instrument is reliable, but an instrument can be reliable without being valid.
Below are two main classes of validity, each having several subtypes:
Convergent validity
Item loadings (standardised regression weights)
Item-to-total correlation values
Average variance extracted (AVE)
Discriminant validity
Inter-construct correlation matrix
Average variance extracted (AVE) vs. shared variance
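The AVE-versus-shared-variance comparison above is the Fornell-Larcker criterion: each construct's AVE should exceed its squared correlation with every other construct. A small Python sketch with hypothetical loadings and a hypothetical inter-construct correlation:

```python
import numpy as np

def ave(loadings) -> float:
    """Average variance extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam**2).mean())

# Hypothetical loadings for two constructs, and their correlation.
ave_a = ave([0.80, 0.75, 0.70])
ave_b = ave([0.85, 0.80, 0.70])
r_ab = 0.55                     # inter-construct correlation (hypothetical)
shared = r_ab ** 2              # shared variance

# Fornell-Larcker criterion: AVE of each construct > shared variance.
print(ave_a > shared and ave_b > shared)  # True -> discriminant validity supported
```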






Checking Research Model Fit
Checking the model fit for CFA & Path
Model
Chi-square/degrees-of-freedom ratio: (< 3)
Comparative fit index (CFI): (> 0.900)
Goodness of fit index (GFI): (> 0.900)
Incremental fit index (IFI): (> 0.900)
Normed fit index (NFI): (> 0.900)
Tucker Lewis index (TLI): (> 0.900)
Root mean square error of
approximation (RMSEA): (< 0.08)
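The cut-offs above can be bundled into a simple checker. The fit values in the example are hypothetical, and the thresholds themselves are conventions rather than strict rules:

```python
def fit_acceptable(fit: dict) -> dict:
    """Compare common SEM fit indices to the conventional cut-offs."""
    return {
        "chi2/df": fit["chi2/df"] < 3,
        "CFI": fit["CFI"] > 0.900,
        "GFI": fit["GFI"] > 0.900,
        "IFI": fit["IFI"] > 0.900,
        "NFI": fit["NFI"] > 0.900,
        "TLI": fit["TLI"] > 0.900,
        "RMSEA": fit["RMSEA"] < 0.08,
    }

# Hypothetical fit results for an acceptable model.
example = {"chi2/df": 2.1, "CFI": 0.95, "GFI": 0.93, "IFI": 0.95,
           "NFI": 0.92, "TLI": 0.94, "RMSEA": 0.05}
print(all(fit_acceptable(example).values()))  # True
```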
THE END
THANK YOU!
THE CURRENT RESEARCH
METHODOLOGY FACILITATION
MATERIAL REMAINS A FULL-PROPERTY
OF PROFESSOR RICHARD CHINOMONA.
PERMISSION IS REQUIRED FIRST TO
USE THE CURRENT MATERIAL.
