In the post "Splitting spectral data into training and test sets", we set aside the even samples for validation. Now, after the calculations in the post "PCR vs PLS part 3", it is time to see whether the model's performance on this even validation set is satisfactory.
First we can over-plot the even-set predictions on the plot of the blue and red samples we saw in "PCR vs PLS part 3". The first impression is that they fit quite well, so the performance seems to be as expected during the development of the model.
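One way the even-set predictions (Xodd_pcr3.evenpred) could be generated is with predict() from the pls package, applying the 10-component model to the MSC-treated even spectra. This is only a sketch, assuming the Xodd_pcr3 model and the soy_ift_even$X_msc matrix from the earlier posts:

# Sketch: predict the even (test) samples with the 10-component PCR model
# (assumes Xodd_pcr3 and soy_ift_even$X_msc from the previous posts)
Xodd_pcr3.evenpred <- predict(Xodd_pcr3, ncomp = 10,
                              newdata = soy_ift_even$X_msc)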
plot(Xodd_pcr3.evenpred,Prot[even,],col="green",
bg=3,pch=23,xlim=c(40,52),ylim=c(40,52),
xlab="",ylab="")
legend("topleft",legend=c("Odd", "Odd CV","Even"),
col=c("blue","red","green"),pch=c(1,1,18),
cex=0.8,bg=)
abline(0,1)
The prediction error for the Even Test Set is:
RMSEP(Xodd_pcr3,ncomp=10,newdata=soy_ift_even$X_msc,
intercept=FALSE)
[1] 0.9616
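The same figure can be checked by hand, since the RMSEP is just the root mean squared difference between the predictions and the reference values. A minimal sketch, assuming the prediction and reference objects used in the plot above:

# Rough manual check of the prediction error on the even set
pred_even <- as.numeric(Xodd_pcr3.evenpred)   # even-set predictions
ref_even  <- Prot[even, ]                     # even-set reference protein values
sqrt(mean((pred_even - ref_even)^2))          # root mean squared error of prediction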
This "even" sample set is probably not a truly independent set, so we need to go a step further and check the model with a really independent set: samples measured on a different instrument, with reference values from different laboratories, and so on. That will be the subject of the next part of this series on PCR and PLS regressions.