 The CALIS Procedure

The correlation matrix from Kinzer and Kinzer (N=326) is used by Guttman (1957) as an example that yields an approximate simplex. McDonald (1980) uses this data set as an example of factor analysis in which he supposes that the loadings on the second factor are a linear function of the loadings on the first factor, for example

   b_{j2} = alpha + beta * b_{j1},   j = 1, ..., 6

This example is also discussed in Browne (1982). The matrix specification of the model is

   C = F1 F1'

with the 6 x 8 matrix

   F1 = | b_{11}  b_{12}  u_1   0    0    0    0    0  |
        | b_{21}  b_{22}   0   u_2   0    0    0    0  |
        | b_{31}  b_{32}   0    0   u_3   0    0    0  |
        | b_{41}  b_{42}   0    0    0   u_4   0    0  |
        | b_{51}  b_{52}   0    0    0    0   u_5   0  |
        | b_{61}  b_{62}   0    0    0    0    0   u_6 |
This example is recomputed by PROC CALIS to illustrate a simple application of the COSAN model statement combined with program statements. This example also serves to illustrate the identification problem.
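The structure of this model can be sketched numerically before running the procedure. The following Python snippet (not part of the SAS example; build_F is a hypothetical helper) builds the 6 x 8 matrix F with the first-factor loadings in column 1, the linearly constrained second-factor loadings in column 2, and the unique loadings on the diagonal of columns 3 through 8, then forms C = F F':

```python
import numpy as np

# Illustrative sketch of the COSAN parameterization used below:
# column 1 of F holds the free first-factor loadings x1-x6, column 2
# is the linear function alfa + beta*x, and columns 3-8 hold the
# unique loadings x13-x18 on the diagonal.
def build_F(x, u, alfa, beta):
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    return np.column_stack([x, alfa + beta * x, np.diag(u)])

# Arbitrary starting values, matching the PARMS defaults in the code below.
F = build_F(x=[0.5] * 6, u=[0.7] * 6, alfa=0.5, beta=-0.5)
C = F @ F.T         # model-implied matrix C = F F'
print(F.shape)      # (6, 8)
print(C.shape)      # (6, 6)
```

With alfa = 0.5 and beta = -0.5, every second-factor loading starts at 0.5 - 0.5*0.5 = 0.25; PROC CALIS then optimizes the free elements of F subject to this linear constraint.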
   data Kinzer(TYPE=CORR);
      Title "Data Matrix of Kinzer & Kinzer, see GUTTMAN (1957)";
      _TYPE_ = 'CORR';
      INPUT _NAME_ $ Obs1-Obs6;
      datalines;
Obs1  1.00   .     .     .     .     .
Obs2   .51  1.00   .     .     .     .
Obs3   .46   .51  1.00   .     .     .
Obs4   .46   .47   .54  1.00   .     .
Obs5   .40   .39   .49   .57  1.00   .
Obs6   .33   .39   .47   .45   .56  1.00
;


In a first test run, PROC CALIS fits the same model reported in McDonald (1980). The following code specifies maximum likelihood estimation using the Levenberg-Marquardt optimization algorithm:

   proc calis data=Kinzer method=max outram=ram nobs=326;
      Title2 "Linearly Related Factor Analysis, (Mcdonald,1980)";
      Title3 "Identification Problem";
      Cosan F(8,Gen) * I(8,Ide);
      Matrix F
         [ ,1] = X1-X6,
         [ ,2] = X7-X12,
         [1,3] = X13-X18;
      Parms Alfa = .5 Beta = -.5;
      X7  = Alfa + Beta * X1;
      X8  = Alfa + Beta * X2;
      X9  = Alfa + Beta * X3;
      X10 = Alfa + Beta * X4;
      X11 = Alfa + Beta * X5;
      X12 = Alfa + Beta * X6;
      Bounds X13-X18 >= 0.;
      Vnames F Fact1 Fact2 Uvar1-Uvar6;
   run;


The pattern of the initial values is displayed in vector and in matrix form. You should always read this output carefully, particularly when you use your own programming statements to constrain the matrix elements. The vector form shows the mapping of the model parameters to the indices of the vector X that is optimized. The matrix form marks parameter elements that are constrained by program statements by giving their indices in X in angle brackets (< >). An asterisk trailing the iteration number in the displayed optimization history of the Levenberg-Marquardt algorithm indicates that the optimization process encountered a singular Hessian matrix. When this happens, especially in the last iterations, the model may not be properly identified.

The computed chi-square value of 10.337 for 7 degrees of freedom and the computed unique loadings agree with those reported by McDonald (1980), but the maximum likelihood estimates of the common factor loadings differ to some degree. Because the problem is not identified, the common factor loadings can be subjected to transformations that do not increase the value of the optimization criterion. An estimation problem that is not fully identified can yield different solutions caused only by different initial values, different optimization techniques, or computers with different machine precision or floating-point arithmetic.

To overcome the identification problem in the first model, restart PROC CALIS with a simple modification to the model in which the former parameter X1 is fixed to 0. This leads to 8 instead of 7 degrees of freedom. The following code produces results that are partially displayed in Output 19.4.1.
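The degree-of-freedom bookkeeping behind this change can be spelled out, assuming the parameter counts stated above (X7-X12 are dependent parameters and are not counted):

```python
# For p = 6 observed variables, a correlation structure supplies
# p(p+1)/2 nonredundant moments to fit.
p = 6
moments = p * (p + 1) // 2        # 21 functions (observations)

# First run: X1-X6, Alfa, Beta, and X13-X18 are free.
parms_run1 = 6 + 2 + 6            # 14 parameters
# Second run: X1 is fixed to 0, removing one free parameter.
parms_run2 = parms_run1 - 1       # 13 parameters

print(moments - parms_run1)       # 7 degrees of freedom
print(moments - parms_run2)       # 8 degrees of freedom
```

The second count matches the "Parameter Estimates 13 / Functions (Observations) 21" lines in the output.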

   data ram2(TYPE=RAM);
      set ram;
      if _type_ = 'ESTIM' then
         if _name_ = 'X1' then do;
            _name_ = ' '; _estim_ = 0.;
         end;
   run;

   proc calis data=Kinzer method=max inram=ram2 nobs=326;
      Title2 "Linearly Related Factor Analysis, (Mcdonald,1980)";
      Title3 "Identified Model";
      Parms Alfa = .5 Beta = -.5;
      X7  = Alfa;
      X8  = Alfa + Beta * X2;
      X9  = Alfa + Beta * X3;
      X10 = Alfa + Beta * X4;
      X11 = Alfa + Beta * X5;
      X12 = Alfa + Beta * X6;
      Bounds X13-X18 >= 0.;
   run;


Output 19.4.1: Linearly Related Factor Analysis: Identification Problem

 Data Matrix of Kinzer & Kinzer, see GUTTMAN (1957) Linearly Related Factor Analysis, (Mcdonald,1980) Identified Model

 The CALIS Procedure Covariance Structure Analysis: Pattern and Initial Values

 COSAN Model Statement

                Matrix   Rows   Columns   Matrix Type
      Term 1    1  F       6       8      GENERAL
                2  I       8       8      IDENTITY


 The CALIS Procedure Covariance Structure Analysis: Maximum Likelihood Estimation

 Parameter Estimates          13
 Functions (Observations)     21
 Lower Bounds                  6
 Upper Bounds                  0

 Optimization Start
 Active Constraints            0
 Objective Function            0.3234289189
 Max Abs Gradient Element      2.2633860283
 Radius                        5.8468569273


                 Function   Active     Objective  Objective  Max Abs            Ratio Between
 Iter  Restarts  Calls      Constr.    Function   Function   Gradient   Lambda  Actual and
                                                  Change     Element            Predicted Change
   1      0         2          0        0.07994   0.2435     0.3984       0        0.557
   2      0         3          0        0.03334   0.0466     0.0672       0        1.202
   3      0         4          0        0.03185   0.00150    0.00439      0        1.058
   4      0         5          0        0.03181   0.000034   0.00236      0        0.811
   5      0         6          0        0.03181   3.982E-6   0.000775     0        0.591
   6      0         7          0        0.03181   9.275E-7   0.000490     0        0.543
   7      0         8          0        0.03181   2.402E-7   0.000206     0        0.526
   8      0         9          0        0.03181   6.336E-8   0.000129     0        0.514
   9      0        10          0        0.03181   1.687E-8   0.000054     0        0.505
  10      0        11          0        0.03181   4.521E-9   0.000034     0        0.498
  11      0        12          0        0.03181   1.217E-9   0.000014     0        0.493
  12      0        13          0        0.03181   3.29E-10   8.971E-6     0        0.489

 Optimization Results
 Iterations                   12
 Function Calls               14
 Jacobian Calls               13
 Active Constraints            0
 Objective Function            0.0318073951
 Max Abs Gradient Element      8.9711916E-6
 Lambda                        0
 Actual Over Pred Change       0.4888109559
 Radius                        0.0002016088

 ABSGCONV convergence criterion satisfied.


 Fit Function                                      0.0318
 Goodness of Fit Index (GFI)                       0.9897
 GFI Adjusted for Degrees of Freedom (AGFI)        0.9730
 Root Mean Square Residual (RMR)                   0.0409
 Parsimonious GFI (Mulaik, 1989)                   0.5278
 Chi-Square                                       10.3374
 Chi-Square DF                                     8
 Pr > Chi-Square                                   0.2421
 Independence Model Chi-Square                   682.87
 Independence Model Chi-Square DF                 15
 RMSEA Estimate                                    0.0300
 RMSEA 90% Lower Confidence Limit                  .
 RMSEA 90% Upper Confidence Limit                  0.0756
 ECVI Estimate                                     0.1136
 ECVI 90% Lower Confidence Limit                   .
 ECVI 90% Upper Confidence Limit                   0.1525
 Probability of Close Fit                          0.7137
 Bentler's Comparative Fit Index                   0.9965
 Normal Theory Reweighted LS Chi-Square           10.1441
 Akaike's Information Criterion                   -5.6626
 Bozdogan's (1987) CAIC                          -43.9578
 Schwarz's Bayesian Criterion                    -35.9578
 McDonald's (1989) Centrality                      0.9964
 Bentler & Bonett's (1980) Non-normed Index        0.9934
 Bentler & Bonett's (1980) NFI                     0.9849
 James, Mulaik, & Brett (1982) Parsimonious NFI    0.5253
 Z-Test of Wilson & Hilferty (1931)                0.7019
 Bollen (1986) Normed Index Rho1                   0.9716
 Bollen (1988) Non-normed Index Delta2             0.9965
 Hoelter's (1983) Critical N                     489


 Estimated Parameter Matrix F[6:8] (General Matrix)
 Standard Errors and t Values
 Each cell shows the estimate, its standard error, and the t value; free-parameter
 names appear in brackets. Off-diagonal unique loadings are fixed at 0 and omitted.

        Fact1                            Fact2                       Unique loading
 Obs1    0       (fixed)                 0.7151  0.0405  17.6382     Uvar1  0.7283  0.0408  17.8276  [X13]
 Obs2   -0.0543  0.1042  -0.5215  [X2]   0.7294  0.0438  16.6655     Uvar2  0.6707  0.0472  14.2059  [X14]
 Obs3    0.1710  0.0845   2.0249  [X3]   0.6703  0.0396  16.9077     Uvar3  0.6983  0.0324  21.5473  [X15]
 Obs4    0.2922  0.0829   3.5224  [X4]   0.6385  0.0462  13.8352     Uvar4  0.6876  0.0319  21.5791  [X16]
 Obs5    0.5987  0.1003   5.9665  [X5]   0.5582  0.0730   7.6504     Uvar5  0.5579  0.0798   6.9938  [X17]
 Obs6    0.4278  0.0913   4.6844  [X6]   0.6029  0.0586  10.2928     Uvar6  0.7336  0.0400  18.3580  [X18]


 Additional PARMS and Dependent Parameters
 The Number of Dependent Parameters is 6

 Parameter    Estimate    Standard Error    t Value
 Alfa          0.71511       0.04054         17.64
 Beta         -0.26217       0.12966         -2.02
 X7            0.71511       0.04054         17.64
 X8            0.72936       0.04376         16.67
 X9            0.67027       0.03964         16.91
 X10           0.63851       0.04615         13.84
 X11           0.55815       0.07296          7.65
 X12           0.60295       0.05858         10.29

The lambda value of the iteration history indicates that Newton steps can always be performed. Because no singular Hessian matrices (which can slow down the convergence rate considerably) are computed, this example needs just 12 iterations compared to the 17 needed in the previous example. Note that the number of iterations may be machine-dependent. The value of the fit function, the residuals, and the chi-square value agree with the values obtained in fitting the first model. This indicates that the second model is better identified than the first one; it is fully identified, as indicated by the fact that the Hessian matrix is nonsingular.
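As a sanity check (done here in Python rather than SAS), rebuilding the fitted matrix C = F F' from the estimates in the output above reproduces the reported root mean square residual:

```python
import numpy as np

# Loadings read from the estimated parameter matrix F[6:8] in the output.
fact1 = np.array([0.0, -0.0543, 0.1710, 0.2922, 0.5987, 0.4278])   # X1 (fixed), X2-X6
fact2 = np.array([0.7151, 0.7294, 0.6703, 0.6385, 0.5582, 0.6029]) # X7-X12
uvar  = np.array([0.7283, 0.6707, 0.6983, 0.6876, 0.5579, 0.7336]) # X13-X18

F = np.column_stack([fact1, fact2, np.diag(uvar)])   # 6 x 8
C = F @ F.T                                          # fitted matrix

# Observed Kinzer correlation matrix from the data step.
S = np.array([
    [1.00, 0.51, 0.46, 0.46, 0.40, 0.33],
    [0.51, 1.00, 0.51, 0.47, 0.39, 0.39],
    [0.46, 0.51, 1.00, 0.54, 0.49, 0.47],
    [0.46, 0.47, 0.54, 1.00, 0.57, 0.45],
    [0.40, 0.39, 0.49, 0.57, 1.00, 0.56],
    [0.33, 0.39, 0.47, 0.45, 0.56, 1.00],
])

# RMR over the 21 nonredundant (lower-triangle) elements.
resid = (S - C)[np.tril_indices(6)]
rmr = np.sqrt(np.mean(resid ** 2))
print(round(rmr, 4))    # 0.0409, matching the reported RMR
```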
