7 ago. 2017

Certified Reference Materials for Spectroscopy

A catalogue of “Starna” (www.starna.com) Certified Reference Materials for spectrometers came with the last issue of Spectroscopy Europe.
Each material has a function and a range. The function is the purpose of the filter: to check absorbance accuracy, wavelength accuracy, stray light or resolution.
The range can cover the UV, Visible, NIR and FTIR zones of the electromagnetic spectrum.

In the case of NIR there are some filters like:

NIR Neutral Density Glass References (800 - 3200 nm): To check the absorbance accuracy and linearity of NIR spectrometers.

Metal on Quartz filters (250 – 3200 nm): With Absorbance and Transmittance values certified at different wavelengths.

NIR Solution References (900 – 2600 nm): With 14 certified peaks for wavelength qualification purposes.

Chloroform Stray Light Cell (at approx. 2365 nm): To check Stray Light.

Polystyrene NIR References (NIR and MIR range): With 14 certified peaks in the MIR spectrum and, in addition, eight peaks in the NIR spectrum. These calibration values are traceable to NIST SRM 2065.

Didymium Glass Filter (430 – 890 nm): This filter has 11 peaks covering this range (four peaks over 700 nm).

Wide Range Wavelength Reference (335 – 1945 nm): This filter has 20 peaks in this range (nine of them over 700 nm). It is equivalent to NIST SRM 2065.


You can download the catalog from:
http://www.starna.com/images/reference_material_catalogue2017.pdf

1 ago. 2017

Checking Wavelength Accuracy (XDS)

It is important to check the accuracy of the wavelength peaks using, if possible, a NIST standard, as in this case.

The manufacturer sends a file showing the accuracy of the instrument against this standard, and we have to verify it periodically to see if it shifts.

The verification tells us if the deviation is larger than recommended, even if the diagnostics pass. In that case, we perform an Instrument Calibration, and the values come back closer to the values with which the instrument left the factory.
 
The Delta value is the difference between the nominal value and the found value.
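As a minimal sketch in R (with hypothetical peak positions, not the actual XDS values), the delta values can be computed like this:

```r
# Hypothetical nominal peak positions (nm) from a NIST-traceable standard
nominal <- c(1261.7, 1681.1, 1935.2)
# Hypothetical values found by the instrument during verification
found   <- c(1261.9, 1680.8, 1935.6)

# Delta = nominal - found; large deltas suggest a wavelength shift
delta <- nominal - found
round(delta, 2)   # -0.2  0.3 -0.4
```

If the deltas grow beyond the recommended tolerance, that is the signal to run the Instrument Calibration.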

11 jul. 2017

Dror Sharon: This Tiny Molecular Sensor can Identify a Viagra Pill | WIR...


I am a reader of Wired magazine, and it is nice to see how NIR technology is becoming a part of this digitalized world; it will be amazing to see what this technology brings in the future.

3 jul. 2017

Lid Adjustment - NIRS™ DS2500


The first DS2500 instruments (Generation 1) don't have this system to adjust the gap of the door, and the way to do it is more difficult. If your instrument does not have the two screws to remove the cover shown in the video, it is because it is a "Generation 1" instrument.
The instrument shown in the video is a "Generation 2".

Considering and checking subsample variation

When analyzing a heterogeneous sample, several subsamples are acquired from a large cup, and finally an average of all the subsamples is shown as the result. It is the average result to which we normally give importance, and we compare it to the lab value to know the accuracy of our measurement. Anyway, it is important to look at the individual results for the different subsamples as well.

One reason for this is to check how homogeneous our sample is, looking at the standard deviation of the predictions for each of the constituents. But we can see that, depending on the math treatment we have applied in the equation, the standard deviation of the subsample predictions changes, and in some cases it becomes quite large. This is something we have to consider to make a robust calibration.
 
Remember always to look at the subsample spectra and draw conclusions comparing the spectral RMS with the SD for the different subsamples.
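A small sketch in R of the idea (the prediction values are hypothetical, just to show the calculation):

```r
# Hypothetical predictions (e.g. protein %) for 5 subsamples of one cup
subsample_pred <- c(17.8, 18.1, 17.6, 18.4, 17.9)

# The reported result is the average of the subsample predictions
avg_result <- mean(subsample_pred)

# The SD of the subsample predictions tells us how homogeneous the sample is
# (and how sensitive the chosen math treatment is to subsample variation)
subsample_sd <- sd(subsample_pred)

round(c(average = avg_result, sd = subsample_sd), 3)
```

Comparing this SD across different math treatments for the same cup shows which treatment gives the most stable subsample predictions.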

22 jun. 2017

How to check the cooling liquid - NIRS™ DS2500


Filter Replacement - NIRS™ DS2500


Lamp Replacement - NIRS™ DS2500


Instrument Calibration - NIRS™ DS2500


Checking Temperatures in DS2500 (Lamp)



 
For the performance of the DS2500 instrument to be optimal, we have to pay attention to the temperature of the lamp when running the diagnostics. I consider around 35 ºC to be fine.
Sometimes we find high temperatures like the one in the picture, and even if the report says it is OK, this temperature can affect the instrument itself and the results.
One of the causes of this temperature increase is that the tank of the pump has lost water, so it is a good idea to check the level and refill it if necessary.

Checking pump level video
Check that the pump is pumping. We should see some turbulence in the water and hear a small noise from the pump.
Check if the water is too dirty or has algae.
Check that the fan is working (its mission is to keep the water cold), and see if the filter is clean so the fan performs its mission better.

Changing the filter

The temperature of the room or laboratory where the instrument is located is also important. A higher room temperature will also increase the lamp temperature.

After checking all these points, and being sure that the lamp is fine, maybe it is the moment to run an instrument calibration:

Instrument Calibration

 

19 jun. 2017

Comparing Residuals, GH and T when validating


When looking at the validation statistics, it is important to look at three values at the same time: the Residual, GH and T value for every sample. From this data (fiber), we can check whether our samples are extrapolating badly, whether the model is not robust, or any other issues.

In this case, as we can see, there are samples with a very high GH, and those samples have in common that the T statistic is negative (in the left tail of the Gaussian bell) and quite high in absolute value.
These samples also have the highest residual values.
Something is telling us that these samples have something special and are not well represented by the equation. The PCA is working fine and is detecting these samples as outliers, but we need to know what makes these samples special.

These samples are soy meal and have higher fat values than the ones in the calibration, so the model did not learn enough about the interaction between the fiber bands and the fat bands. So these samples are very interesting for making the calibration more robust.

After checking this, we can add these samples to the calibration to improve the results of the next validation.

 
Graphically, in Excel, we can see the interaction between the Residuals, GH and T values:
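The same inspection can be sketched in R with hypothetical validation statistics (the values below are made up to illustrate the pattern, not the actual fiber data):

```r
# Hypothetical validation statistics for 8 samples
val <- data.frame(
  Residual = c(0.2, -0.1, 0.3, -2.1, 0.1, -1.8, 0.2, -0.2),
  GH       = c(1.2,  0.8, 1.5,  4.3, 1.0,  3.9, 1.1,  0.9),
  T        = c(0.5, -0.3, 0.9, -2.6, 0.2, -2.2, 0.6, -0.4)
)

# Pairwise scatter plots to see the interaction between the three statistics
pairs(val, main = "Residuals vs GH vs T")

# Flag samples with GH > 3 as possible outliers to inspect further
val[val$GH > 3, ]
```

In this toy data, as in the post, the samples flagged by GH are the ones with large negative T values and the largest residuals.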
 

22 may. 2017

Mosaic 7.12 is now available on our Europe server


Mosaic version 7.12 is now available on our Europe server.
Once you try to connect, you should be asked to automatically download and install the new client.
User accounts, passwords remain the same.

Ports used for NOVA:
 
Configure the ports correctly with your IT department for a successful synchronization.

7 may. 2017

Easy way to check the eigen values with the T (scores) matrix

Another interesting matrix multiplication is the product of the transpose of the score matrix T by T itself, in this way:

Tt%*%T

This product gives us a square matrix (a × a), where "a" is the number of loadings or PCs chosen, and the diagonal contains the eigenvalues, which are related to the amount of explained variance for every loading.

If we plot the diagonal, we can see how the eigenvalue decreases with every loading. This plot can help us decide how many loadings or PCs to choose.
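A self-contained sketch with prcomp on a random example matrix (not the resemble objects; note that, depending on how a package scales its scores, the diagonal may carry an n − 1 factor):

```r
# Small example matrix as a stand-in for a spectral data set
set.seed(1)
X  <- matrix(rnorm(100 * 10), nrow = 100)
pc <- prcomp(X, center = TRUE)

T <- pc$x[, 1:4]          # score matrix for a = 4 components

# t(T) %*% T is a 4 x 4 matrix; off-diagonals are ~0 (scores are orthogonal)
TtT <- t(T) %*% T

# Its diagonal, divided by n - 1, gives the eigenvalues (= sdev^2)
eig <- diag(TtT) / (nrow(X) - 1)
plot(eig, type = "b", xlab = "PC", ylab = "Eigenvalue")
```

The "elbow" in this plot is a common criterion for how many PCs to keep.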

 

6 may. 2017

Checking the orthogonality of P (loadings) matrix

One of the values we got in the script of the post "Tutorials with Resemble (Part 3 - orthoProjection)" was the loadings matrix (X.loadings), or what we usually call in this blog the P matrix.

One of the characteristics of the loadings matrix "P", when we develop the PCA, is that if we multiply it by its transpose we get the identity matrix "I":

P<-X.loadings

Pt<-t(X.loadings)

 
P%*%Pt = I

 
In the "I" matrix, the diagonal is "1" and all the other cells are "0", indicating that all the loadings are orthogonal to each other.

Exercise:
  • Check it by yourself and extract the diagonal from the product matrix.
  • Represent in a graphic the first loadings:
    • 1 vs 2      : a plane
    • 1, 2 and 3: a cube
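A quick way to check the orthogonality with base R (using prcomp as a stand-in; resemble's X.loadings may be transposed relative to prcomp's rotation, so the product may be P %*% Pt or Pt %*% P depending on the orientation):

```r
# Random matrix as a stand-in for a spectral data set
set.seed(1)
X <- matrix(rnorm(50 * 8), nrow = 50)
P <- prcomp(X, center = TRUE)$rotation   # loadings: variables x components

I_check <- t(P) %*% P    # should be the identity matrix

# Diagonal ~1, off-diagonals ~0: the loadings are orthonormal
round(I_check, 10)
```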
 

19 abr. 2017

How to load a REP file in a MOSAIC LOCAL Prediction Model

If we use the MONITOR in Win ISI or a LOCAL Prediction Model in ISI Scan, there is a field to load the REP file (a ".nir" file which includes the variation we want to minimize in the model, like temperature, differences between instruments, differences between the pathlengths of the gold reflectors, ….). This way LOCAL uses the REP file when developing the calibration.

In MOSAIC the REP file must be loaded in a different way.

As usual we load the ".RED" file, reduced with the appropriate math treatment, and we set the maximum and minimum number of factors and samples, …, but where do I load the repeatability file (.NIR)?

😏...Easy but tricky.

Rename the extension of the repeatability file from ".NIR" to ".REP", give this file the same name as the ".RED" file, and put them both in the same folder. Now, when you import the ".RED" file into the LOCAL Prediction Model, the ".REP" file will go with it. Just check it on the Links tab of the LOCAL P.M.
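The rename-and-pair step can be scripted in R (the file names below are hypothetical, just for the demo; it runs in a temporary folder):

```r
# Demo in a temporary folder with hypothetical file names
old <- setwd(tempdir())
file.create("repeatability.NIR", "mymodel.RED")

# Copy the repeatability file ".NIR" -> ".REP", same base name as the ".RED"
file.copy("repeatability.NIR", "mymodel.REP")

file.exists("mymodel.REP")   # both files now sit in the same folder
setwd(old)
```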
 
As you know, something similar happens when we load a ".EQA" file and it also loads the ".PCA" and ".LIB" files.

Thanks to Montse for testing this feature...😉

24 mar. 2017

Tutorials with Resemble (Part 3 - orthoProjection)


Using orthoProjection:
One of the different functions of Resemble is "orthoProjection", and we can use it with different options. Let's check in this post the simplest one:
oP<-orthoProjection(Xr=der.Xr, X2 = NULL,
                    Yu = NULL,method = "pca",
                    pcSelection = list("cumvar",0.99),
                    center = TRUE, scaled = FALSE,
                    cores = 1)
We can use the training data from the previous post, with the SG filter (just for smoothing) and the first derivative: der.Xr
The method we use is "pca", so we don't have to use the reference data "Yr". We don't use any additional set, so X2 = NULL.
The number of terms will explain a cumulative variance of 99%.
We center the spectra, and we don't scale them.
Now run this script in R (make sure that the resemble package is loaded: library(resemble)).

Now we can check the values we get:
names(oP)
[1] "scores" "X.loadings" "variance" "sc.sdv" "n.components"
[6] "pcSelection" "center" "scale" "method"
 

 >attach(oP)
>scores
Matrix T of scores
>X.loadings
Matrix P of Loadings
>variance
We can see the eigenvalues, and the cumulative and explained variance
>sc.sdv
eigenvalues
>n.components
Number of terms chosen to explain 99% of the variance
>pcSelection
cumvar 0.99
>center
average spectrum
>scale
1
>method
pca(svd)

Check all these values and matrices.
3.1. Practice plotting the average spectrum. (page Exercises)
3.2. Play with the cumulative variance. (page Exercises)
3.3. Plot the loadings. (page Exercises)
3.4. Plot combinations of score maps. (page Exercises)

¡And enjoy Chemometrics with R!

23 mar. 2017

Tutorials with Resemble (part 2)

If you have practised with the post "Tutorials with Resemble (part 1)", you can continue adding more script following the recommendations of the Resemble package. This time we add another math treatment on top of the SG filter.
Once the "sg" function has been applied, we can calculate the first derivative to define the variance in the spectra better. The Resemble manual shows us how to convert the spectra to a first derivative using differences. We can do it for the calibration and validation sets:

der.Xr <- t(diff(t(Xr), lag = 1, differences = 1))
der.Xu <- t(diff(t(Xu), lag = 1, differences = 1))

In this case we lose a data point on the left side of the spectra, so we have to redefine the wavelengths to see the plot of the first derivative.

wavelength_der<-seq(1112,2488,by=2)
matplot(wavelength_der,t(der.Xr),type="l",col="black",
        xlab="Wavelength(nm)",ylab="Absorbance")

and we get this plot:

Practise doing the same for the validation set Xu, overplotting its spectra with those of the training set Xr.
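One way to do the overplot, sketched here with synthetic matrices standing in for der.Xr and der.Xu (so the snippet runs on its own; with the real objects, just swap them in):

```r
# Synthetic spectra as stand-ins for der.Xr (training) and der.Xu (validation)
set.seed(1)
wl  <- seq(1112, 2488, by = 2)
Xr2 <- matrix(rnorm(5 * length(wl), sd = 0.01), nrow = 5)
Xu2 <- matrix(rnorm(3 * length(wl), sd = 0.01), nrow = 3)

# Plot the training set in black, then overlay the validation set in red
matplot(wl, t(Xr2), type = "l", lty = 1, col = "black",
        xlab = "Wavelength(nm)", ylab = "Absorbance")
matlines(wl, t(Xu2), lty = 1, col = "red")
legend("topleft", c("Xr (training)", "Xu (validation)"),
       col = c("black", "red"), lty = 1)
```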
 
Do you see significant differences?
 
 
Enjoy using Chemometrics with R.


20 mar. 2017

Tutorials with Resemble (part 1)

I see that some of you are interested in the package "Resemble", so I am going to re-write some of the posts with this package, so we can understand better the LOCAL concept we have been treating with Win ISI.

The examples use the NIRsoil data that we can get from the package "prospectr".
require(prospectr)
data(NIRsoil)
To plot the raw spectra, just write this script:
wavelength<-seq(1100,2498,by=2)
matplot(wavelength,t(NIRsoil$spc),type="l",col="blue",
        xlab="Wavelength(nm)",ylab="Absorbance",ylim=c(0,1))

The Resemble manual recommends applying an SG filter without derivatives to smooth the spectra, so in this case we proceed as in the manual:
sg <- savitzkyGolay(NIRsoil$spc, p = 3, w = 11, m = 0)
NIRsoil$spc <- sg
Now the spectra are truncated on both sides, so we have to create:
wavelength_sg<-seq(1110,2488,by=2)
and we can plot the spectra filtered:
matplot(wavelength_sg,t(NIRsoil$spc ),type="l",col="black",
        xlab="Wavelength(nm)",ylab="Absorbance",ylim=c(0,1))

You won't see much difference from the raw spectra.

Now we split the data into a training (Xr , Yr) set and a validation set (Xu, Yu)
       #VALIDATION
Xu <- NIRsoil$spc[!as.logical(NIRsoil$train),]
Yu <- NIRsoil$CEC[!as.logical(NIRsoil$train)]    

    #TRAINING
Xr <- NIRsoil$spc[as.logical(NIRsoil$train),]   

Yr <- NIRsoil$CEC[as.logical(NIRsoil$train)]     
and we take out the data without reference values from both sets:
Xu <- Xu[!is.na(Yu),]    
Xr <- Xr[!is.na(Yr),]    
Yu <- Yu[!is.na(Yu)]     
Yr <- Yr[!is.na(Yr)]
     


Practise making plots again of the spectra of the different sets. Overlap training and validation sets with different colors, …, and enjoy using R for chemometrics.

6 mar. 2017

Neighborhood Mahalanobis distance matrix


Working with the chemometric packages in R helps us to understand other commercial chemometric software better.

In Resemble we can use the function fDiss to get a matrix of distances between all the samples in a spectral data set. We get a square, symmetric matrix with zeros on the diagonal, because the distance between a sample and itself in the PCA space is zero. This way we can see redundant information and remove it from the spectral set. Finally, we get a well-distributed cloud of samples, and the average spectrum is more representative of all of them.

Here I just trim the matrix in order to see how close the spectra of the first 10 samples are to each other.
The spectra used were the NIRsoil data from R.
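The same kind of distance matrix can be sketched with base R alone (random data standing in for the spectra; distances computed in a standardized PCA score space, which is the Mahalanobis idea, rather than with fDiss itself):

```r
# Random matrix as a stand-in for a set of 30 spectra
set.seed(1)
X  <- matrix(rnorm(30 * 20), nrow = 30)
T  <- prcomp(X, center = TRUE)$x[, 1:5]   # scores on 5 PCs
Ts <- scale(T)                            # standardize each PC

D <- as.matrix(dist(Ts))                  # 30 x 30 distance matrix
diag(D)                                   # zeros: each sample vs itself

round(D[1:10, 1:10], 2)                   # trim to the first 10 samples
```

Pairs of samples with very small off-diagonal distances are candidates for redundant information.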


5 mar. 2017

Weighted Average (LOCAL)


We have seen in the post "LOCAL optimization" how, when giving a prediction, LOCAL uses the whole range of PLS terms we have fixed in the options (Min to Max number of terms), and the result is a weighted average of the predictions of all the models. So choosing the right range is important to get more accurate predictions.
Looking in the Resemble R package documentation you can see some explanations about how the calculations are made:


"Weighted average pls ("wapls1"): It uses multiple models generated by multiple pls components (i.e. between a minimum and a maximum number of pls components). At each local partition the final predicted value is a weighted average of all the predicted values generated by the multiple pls models. The weight for each component is calculated as follows":

w_j = 1 / (s_{1:j} × g_j)


"where s_{1:j} is the root mean square of the spectral residuals of the unknown (or target) sample when a total of j pls components are used, and g_j is the root mean square of the regression coefficients corresponding to the jth pls component (see Shenk et al., 1997 for more details).
"wapls1" is not compatible with valMethod = "loc_crossval", since the weights are computed based on the sample to be predicted at each local iteration."
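A hedged sketch of the wapls1 idea in R (not resemble's exact internals; all the numbers are hypothetical): predictions from models with different numbers of PLS terms, weighted by 1 / (s_{1:j} × g_j) and normalized.

```r
# Hypothetical predictions from pls models with j = 5..10 terms for one sample
pred <- c(12.1, 12.3, 12.2, 12.6, 12.5, 12.4)
s    <- c(0.30, 0.25, 0.22, 0.20, 0.21, 0.23)  # rms spectral residuals s_{1:j}
g    <- c(0.8,  0.9,  1.1,  1.3,  1.6,  2.0)   # rms regression coefficients g_j

w <- 1 / (s * g)
w <- w / sum(w)                 # normalize the weights so they sum to 1

final_pred <- sum(w * pred)     # weighted average prediction
round(final_pred, 3)
```

Models with small spectral residuals and small coefficients get the largest weights, which is why fixing a sensible Min-Max range of terms matters.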