We have seen that the weights in PLSR are orthogonal (see the post "What condition must have the weights in PLSR?"), so the scores, which are the projections of the spectra onto the weights, must be orthogonal as well.
Let's repeat the same operations as in "What condition must have the weights in PLSR?", this time with the scores:
# Bind the first four PLS score vectors into a matrix
t.matrix <- cbind(Xodd_pls3$scores[, 1], Xodd_pls3$scores[, 2],
                  Xodd_pls3$scores[, 3], Xodd_pls3$scores[, 4])
# Cross-product of the scores: zeros off the diagonal indicate orthogonality
round(t(t.matrix) %*% t.matrix, 4)
[,1] [,2] [,3] [,4]
[1,] 146.3176 0.0000 0.0000 0.0000
[2,] 0.0000 0.4371 0.0000 0.0000
[3,] 0.0000 0.0000 0.0634 0.0000
[4,] 0.0000 0.0000 0.0000 0.0639
As we can see, we obtain a diagonal matrix, so the orthogonality condition holds.
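The diagonal entries are simply the squared norms of the four score vectors. Here is a short sketch (using the t.matrix built above) that checks those lengths and shows that normalizing the scores would turn the cross-product into the identity:

# Squared norms of the score vectors (the diagonal of the cross-product)
diag(t(t.matrix) %*% t.matrix)
# Euclidean length of each score vector
sqrt(colSums(t.matrix^2))
# Dividing every column by its length would make the scores orthonormal,
# so the cross-product would become the identity matrix
t.norm <- sweep(t.matrix, 2, sqrt(colSums(t.matrix^2)), "/")
round(t(t.norm) %*% t.norm, 4)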
The difference is that the weights, apart from being orthogonal, are also orthonormal, because they are normalized to length one:
round(t(w.matrix) %*% w.matrix, 4)
Xodd_pls3_w1 Xodd_pls3_w2 Xodd_pls3_w3 Xodd_pls3_w4
Xodd_pls3_w1 1 0 0 0
Xodd_pls3_w2 0 1 0 0
Xodd_pls3_w3 0 0 1 0
Xodd_pls3_w4 0 0 0 1
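In case w.matrix is no longer in the workspace, one possible way to rebuild it is from the loading weights stored in the pls model object (a sketch; it assumes the weights are in Xodd_pls3$loading.weights and that w.matrix holds the first four of them):

# Bind the first four loading weight vectors into a matrix
w.matrix <- Xodd_pls3$loading.weights[, 1:4]
# Each weight vector is normalized to length one, hence the 1s on the diagonal
round(sqrt(colSums(w.matrix^2)), 4)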