Once we have developed a model with the math treatments we consider adequate, using the calibration samples from Instrument 1 (following the Shootout_2002 tutorial), the idea is to check whether that model performs well on Instrument 2 for exactly the same samples. A bias is expected: even between two instruments of the same model, differences in hardware components, optics, alignments, etc., among other factors, are the cause of this bias.
Some time ago I developed a function to monitor predictions: it plots the results and computes the statistics needed to make decisions, such as whether the bias or the slope should be adjusted, or to check for outliers with high residuals.
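The function itself is not reproduced here, but the core statistics such a monitor computes can be sketched in a few lines of R (a minimal illustration, not the actual monitor14 code; the names `monitor_stats`, `predicted` and `reference` are made up for this example):

```r
# Minimal sketch of the monitoring statistics, assuming 'predicted' and
# 'reference' are numeric vectors of the same length:
monitor_stats <- function(predicted, reference) {
  res   <- predicted - reference   # residuals
  rmsep <- sqrt(mean(res^2))       # root mean squared error of prediction
  bias  <- mean(res)               # systematic error
  sep   <- sd(res)                 # standard error of prediction (bias corrected)
  list(RMSEP = rmsep, Bias = bias, SEP = sep)
}
```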
Due to the high RMSEP of the model used for this monitoring (RMSEP = 4.33 using 4 terms and all the samples), the bias must be quite high before we consider that it should be adjusted. This error is more than 3 times the laboratory error.
So the statistics are:
monitor14(monit.tr2[,2],monit.tr2[,1],155,4,0.95,4.33)
where 0.95 is the confidence level and 4.33 the cross-validation error of the model using 4 terms. The statistics are:
N Validation Samples  = 155
N Calibration Samples = 155
N Calibration Terms   = 4
-------------------------------------
RMSEP    : 4.942
Bias     : -2.509
SEP      : 4.272
UECLs    : 4.951
***SEP is below UECLs (O.K.)***
Corr     : 0.9811
RSQ      : 0.9626
Slope    : 0.9813
Intercept: 6.122
RER      : 19.9    Fair
RPD      : 5.146   Good
BCL(+/-) : 0.6778
***Bias adjustment is not necessary***
Residual Std Dev is : 4.266
***Slope adjustment is not necessary***
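As a quick sanity check on these numbers, RMSEP, SEP and Bias are linked by RMSEP² ≈ SEP² + Bias² (the relation is exact when SEP is computed with n in the denominator instead of n - 1):

```r
# Recombining the reported SEP and Bias should give back (roughly) the RMSEP:
sqrt(4.272^2 + (-2.509)^2)   # approximately 4.95, close to the reported 4.942
```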
We can see that the SEP (the prediction error once corrected for bias) is similar to the error of the model, so a bias adjustment would help to transfer the model from Instrument 1 to Instrument 2.
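If we did decide to apply the correction, it would be as simple as subtracting the observed bias from the Instrument 2 predictions. A sketch, assuming column 2 of monit.tr2 holds the predictions and column 1 the reference values (the order used in the monitor14 call above, though that argument order is an assumption here):

```r
# Bias correction sketch: shift the Instrument 2 predictions by the mean residual
bias     <- mean(monit.tr2[, 2] - monit.tr2[, 1])  # predicted minus reference
adjusted <- monit.tr2[, 2] - bias                  # bias-corrected predictions
```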
I will remove the 5 samples and come back with the results.