there, we now actually have a series of relations among all three of those constructs. Between the two
exogenous constructs we have some estimate of correlation; between the dependent and the
independent constructs we have some structural relations, some standardized beta weight kinds of
things. What we can actually do in this system is get predicted scores for each of these constructs
based on the structural relations that exist there. And so, in an iterative process, the information that we
got from this first pass at estimating the structural connections allows us to get new estimates for those
proxies.
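[Editor's note: for readers who want to see the mechanics of that inner-model step, here is a minimal sketch in Python. The function name, the data layout, and the centroid weighting scheme are illustrative assumptions; PLS software offers several inner weighting schemes, and this is just one.]

```python
import numpy as np

def inner_proxies(scores, adjacency):
    """One inner-model update: rebuild each construct's proxy from the
    proxies of the constructs it is connected to (centroid scheme).

    scores:    (n, J) matrix, one column of standardized proxy scores
               per construct (hypothetical layout)
    adjacency: (J, J) 0/1 matrix marking which constructs are connected
    """
    r = np.corrcoef(scores, rowvar=False)        # estimated relations among proxies
    inner_w = np.sign(r) * adjacency             # centroid scheme keeps only the signs
    Z = scores @ inner_w                         # each proxy rebuilt from its neighbors
    return (Z - Z.mean(axis=0)) / Z.std(axis=0)  # rescale to unit variance
```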
Patrick
29:15
So when we think about how we do business as usual, we're done, right? If we're doing least
squares, or if we have a second dependent variable, that's a mediating model, and we're using
maximum likelihood. That's it, right? What you walked through is: we take our items, we make a composite;
we take the composites, we fit a model. But here what you're saying is, wait a minute, we've got
these model-implied predicted values of the composite. Could we use those in some way to update how
we're computing the composite itself?
Greg
29:47
Exactly right. And so once we do that, based on the estimated relations that we have among the
constructs (and there are different ways of doing that, but it involves whatever a construct is attached to,
whether it's another exogenous construct or an endogenous construct; again, there are different ways of
doing it), you try to use the information from that model to get updated scores, just as you said. Once
you do that, what happens to those updated scores? Well, you take them from that structural model,
which in this world is referred to as the inner model, and you carry them back out to the measurement model,
or what here is called the outer model. And how you do that is going to depend on whether you have
a construct that is Mode A, which is reflective, traditionally the way you and I think about things, or Mode
B. If you have a Mode A system, then you can get predicted scores for each of those measured
indicators by doing just a simple regression: I can use the proxy to predict indicator one, I can use the
proxy to predict indicator two, et cetera. I can go through one by one and do that in that reflective, Mode A
kind of system. When I have something that is Mode B, that is, a formative kind of system where
all the variables are actually coming into the construct, that's just a multiple regression: because I now
have updated scores for the construct, and I have all of the scores for the indicator variables, I can run
a multiple regression. And now I have updated relations between the indicator variables and the
particular construct. Once I have those, what I can actually do is get new predicted values for that
inner model. And there is this iterative process, and you alluded to that earlier, where we go inner model,
outer model, inner model, outer model, until things stabilize. And when things converge satisfactorily, then
we have our final estimates of those constructs, and we just do an OLS regression. Boom, done and
done, with the idea that we have created a system whose goal in the end is to try to explain variance.
And that's different from what you and I are used to.
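[Editor's note: putting Greg's whole description together, a sketch of the full inner/outer iteration might look like the following. This is a hedged illustration under assumptions, not a definitive implementation: the names are hypothetical, the centroid inner scheme is one of several used in practice, and convergence handling in real PLS software is more careful.]

```python
import numpy as np

def standardize(X):
    # z-score each column
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pls_scores(blocks, modes, adjacency, tol=1e-6, max_iter=300):
    """Iterate inner model / outer model until the outer weights stabilize.

    blocks:    list of (n, k_j) indicator matrices, one per construct
    modes:     list of 'A' (reflective) or 'B' (formative), one per construct
    adjacency: (J, J) 0/1 matrix marking which constructs are connected
    """
    blocks = [standardize(X) for X in blocks]
    weights = [np.ones(X.shape[1]) for X in blocks]   # start with equal weights
    for _ in range(max_iter):
        # Outer step: each proxy is a weighted sum of its own indicators.
        S = standardize(np.column_stack([X @ w for X, w in zip(blocks, weights)]))
        # Inner step (centroid scheme): each construct's inner proxy is a
        # sign-weighted sum of the proxies of the constructs it connects to.
        Z = standardize(S @ (np.sign(np.corrcoef(S, rowvar=False)) * adjacency))
        new_weights = []
        for j, (X, mode) in enumerate(zip(blocks, modes)):
            if mode == 'A':
                # Mode A: simple regressions, proxy predicting each indicator
                # one by one; with standardized data this reduces to covariances.
                w = X.T @ Z[:, j] / X.shape[0]
            else:
                # Mode B: one multiple regression of the inner proxy on the block.
                w = np.linalg.lstsq(X, Z[:, j], rcond=None)[0]
            new_weights.append(w / np.linalg.norm(w))
        if all(np.allclose(w0, w1, atol=tol)
               for w0, w1 in zip(weights, new_weights)):
            break                                     # converged satisfactorily
        weights = new_weights
    # Final construct scores; the structural (inner model) paths would then
    # be estimated from these columns with ordinary least squares.
    return standardize(np.column_stack([X @ w for X, w in zip(blocks, weights)]))
```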
Patrick
31:56
That's right, and you start moving into a topic that's getting increasing appreciation in the field, often
linked to machine learning, which is distinguishing prediction and explanation. We can have a prediction
model, that is: look, we're going to roll up our sleeves, and we're going to do whatever we need to do to
maximize our ability to make y and y-hat as close together as possible. That is not as much of what we