Selective data averaging/normalization, with the ability to:
Choose certain scan numbers (in one or more files/folders) for averaging (already in Pieter's script).
Sum spectra that have different energy ranges and energy steps (requires interpolation onto a common grid; already in Pieter's script).
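A minimal sketch of the interpolate-and-sum step, assuming each spectrum is a pair of numpy arrays (energy, intensity) with increasing energy; the function name and signature are illustrative, not Pieter's actual API:

```python
import numpy as np

def sum_spectra(spectra, step=0.1):
    """Interpolate spectra onto a common energy grid and sum them.

    `spectra` is a list of (energy, intensity) array pairs with
    monotonically increasing energy. Only the energy range common to
    all spectra is used.
    """
    e_min = max(e.min() for e, _ in spectra)
    e_max = min(e.max() for e, _ in spectra)
    grid = np.arange(e_min, e_max + step / 2, step)
    total = np.zeros_like(grid)
    for e, i in spectra:
        total += np.interp(grid, e, i)  # linear interpolation
    return grid, total
```

The common grid is restricted to the overlap of all scans so no spectrum is extrapolated.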
Choose the energy range used for averaging, plotting, normalization, fitting etc. (important, for example, if scans are made with different energy ranges and normalizing based on area).
Choose options for normalization (area, max, …).
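A sketch of the normalization options, assuming numpy arrays; the `e_range` parameter is a hypothetical way to implement the energy-range restriction mentioned above:

```python
import numpy as np

def normalize(energy, intensity, mode="area", e_range=None):
    """Normalize a spectrum by integrated area or by its maximum.

    `e_range=(lo, hi)` optionally restricts the window used to compute
    the normalization factor (hypothetical parameter).
    """
    e = np.asarray(energy, dtype=float)
    i = np.asarray(intensity, dtype=float)
    mask = np.ones_like(e, dtype=bool)
    if e_range is not None:
        mask = (e >= e_range[0]) & (e <= e_range[1])
    if mode == "area":
        # trapezoidal area over the selected window
        ew, iw = e[mask], i[mask]
        factor = np.sum(0.5 * (iw[1:] + iw[:-1]) * np.diff(ew))
    elif mode == "max":
        factor = i[mask].max()
    else:
        raise ValueError(f"unknown normalization mode: {mode}")
    return i / factor
```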
Remove spikes (already in Pieter's script).
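One common way to implement spike removal is a running-median filter; this is a sketch of that approach, not necessarily what Pieter's script does, and the default window and threshold are illustrative:

```python
import numpy as np

def despike(intensity, window=5, threshold=5.0):
    """Replace points deviating from a running median by more than
    `threshold` robust standard deviations with the median value."""
    i = np.asarray(intensity, dtype=float)
    half = window // 2
    padded = np.pad(i, half, mode="edge")
    med = np.array([np.median(padded[k:k + window])
                    for k in range(i.size)])
    resid = i - med
    # robust sigma from the median absolute deviation (MAD)
    mad = np.median(np.abs(resid))
    sigma = 1.4826 * mad if mad > 0 else resid.std()
    out = i.copy()
    spikes = np.abs(resid) > threshold * sigma
    out[spikes] = med[spikes]
    return out
```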
Include concentration correction if necessary.
Stitch together scans of separate regions. For Kβ and valence-to-core (VtC) spectra, we scan three regions separately, with different step sizes and counting times, but they need to be plotted together. Pieter did this in his code, so it should be there.
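A minimal sketch of the stitching step, assuming each region is a (energy, intensity) pair of numpy arrays; how Pieter's code treats overlapping points is not known, so here they are simply kept from both regions:

```python
import numpy as np

def stitch(regions):
    """Merge separately scanned regions (e.g. Kβ mainline and VtC)
    into one spectrum sorted by energy. Overlapping points are kept
    from every region; averaging the overlap could be added."""
    e = np.concatenate([r[0] for r in regions])
    i = np.concatenate([r[1] for r in regions])
    order = np.argsort(e)
    return e[order], i[order]
```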
Background creation and subtraction (different models such as linear, Voigt, etc.). For example, Pieter said that the most physically meaningful way to determine a background for the VtC region is to fit the Kβ peak with some number (how many?) of pseudo-Voigt peaks and use their tails as the background for the VtC peaks.
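A sketch of the tail-as-background idea with a single pseudo-Voigt (extend to more peaks as needed). It uses scipy's `curve_fit` rather than lmfit just to keep the example self-contained; the function names and the initial-guess handling are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, cen, sigma, frac):
    """Pseudo-Voigt: linear mix of a Gaussian and a Lorentzian of
    equal width, with `amp` the peak height and `frac` the
    Lorentzian fraction."""
    gauss = np.exp(-0.5 * ((x - cen) / sigma) ** 2)
    lorentz = sigma ** 2 / ((x - cen) ** 2 + sigma ** 2)
    return amp * ((1 - frac) * gauss + frac * lorentz)

def kbeta_background(energy, intensity, p0):
    """Fit the Kβ main peak with one pseudo-Voigt and return its
    evaluated tail as a background estimate for the VtC region.
    `p0` = (amp, center, sigma, frac) initial guess."""
    popt, _ = curve_fit(pseudo_voigt, energy, intensity, p0=p0)
    return pseudo_voigt(energy, *popt), popt
```

Subtracting the returned curve from the measured spectrum then leaves the VtC features on a flat baseline (to the extent the pseudo-Voigt model describes the Kβ tail).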
Basic peak finding and fitting would be nice to have in general (lmfit?): a small function that does a quick fit and a plot to check peak shapes, positions, etc. More detailed fitting can then be done later.
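A sketch of the quick-check function using `scipy.signal.find_peaks` (lmfit, as suggested above, would be the natural choice for the subsequent detailed fit); plotting is left out to keep the example testable:

```python
import numpy as np
from scipy.signal import find_peaks

def quick_peaks(energy, intensity, prominence=None):
    """Return (peak_energies, peak_heights) for a fast sanity check
    of peak positions and heights. A detailed model fit (e.g. with
    lmfit) can follow later; a plot of the found peaks could be
    added here with matplotlib."""
    energy = np.asarray(energy)
    intensity = np.asarray(intensity)
    idx, _ = find_peaks(intensity, prominence=prominence)
    return energy[idx], intensity[idx]
```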
Little functions to be called for basic checks and comparisons. For example, plot the difference of two spectra (interpolating first if their energy ranges/steps differ, as mentioned above) or the derivatives (1st, 2nd) of a scan. The function would take one or two (averaged) scans as input and make a plot.
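A sketch of the two checks, again assuming (energy, intensity) numpy-array pairs and leaving the plotting to the caller; names are illustrative:

```python
import numpy as np

def difference(spec_a, spec_b, step=0.05):
    """Difference of two spectra on a common interpolated grid,
    restricted to their overlapping energy range."""
    e_min = max(spec_a[0].min(), spec_b[0].min())
    e_max = min(spec_a[0].max(), spec_b[0].max())
    grid = np.arange(e_min, e_max + step / 2, step)
    return grid, np.interp(grid, *spec_a) - np.interp(grid, *spec_b)

def derivatives(energy, intensity):
    """First and second derivatives via finite differences."""
    d1 = np.gradient(intensity, energy)
    d2 = np.gradient(d1, energy)
    return d1, d2
```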
Access to motor info (sample positions and other important positions). These should be better included in the measurement file in the future. Pieter's script has started this but has not been completed yet.
Way to quickly identify bad scans in a big collection of scans and to exclude them from the average. For example, I had a very sensitive sample that required many quick XANES scans at several (~50-100) different spots, which were then averaged. Some scans showed beam damage, so I had to exclude them. Checking all the scans manually is fine if there aren't too many (a few hundred), but with thousands it gets tedious. There may be many ways to do such a check, but one way could be to first average all the scans, then compare each individual scan to the average (least squares) and basically get an R² value for each scan. The function would then print the R² values sorted so that the scans deviating most from the average come first, and the user could check those. This should work if the percentage of bad scans is low, so that most of the scans are similar to the average.
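A sketch of this screening idea, assuming the scans are already interpolated onto a common grid and stacked into a 2-D array; the R²-like score below (residual sum of squares against the average, scaled by the average's total variance) is one possible definition, not a prescribed one:

```python
import numpy as np

def rank_scans(scans):
    """Score each scan against the average of all scans and return
    scan indices sorted worst-first (lowest R² first), plus the
    R² values themselves.

    `scans`: 2-D array of shape (n_scans, n_points), all scans on
    the same energy grid.
    """
    scans = np.asarray(scans, dtype=float)
    avg = scans.mean(axis=0)
    ss_tot = ((avg - avg.mean()) ** 2).sum()
    r2 = np.array([1.0 - ((s - avg) ** 2).sum() / ss_tot
                   for s in scans])
    order = np.argsort(r2)  # most deviating scans first
    return order, r2
```

Because the average still contains the bad scans, this only works when the fraction of bad scans is low, exactly as noted above.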
Option to save the averaged/processed data to a file (plain .txt or another format).
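For the plain-text case this can be as simple as `numpy.savetxt`; a minimal sketch with an illustrative function name:

```python
import numpy as np

def save_spectrum(path, energy, intensity, header=""):
    """Write a two-column text file (energy, intensity) with an
    optional header line."""
    np.savetxt(path, np.column_stack([energy, intensity]),
               header=header, fmt="%.6f")
```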