what's to come & optimizations?
The next release should give an idea of the processing and of the 'target'. But there are still a lot of limitations on several points that should be discussed.
internal design - performance
There are several limitations regarding the internal architecture of Spectrum and Spectra. See #24
An issue regarding this has been created: #21
beta test and communication
Is it enough to create a pre-release / release and get some beta testers?
Initially (when processing was done on a single spectrum) each 'orange widget' made a copy of the XASObject to ensure that reprocessing was possible within the upstream process only (with a 'freeze' state).
But with the latest tests using thousands of spectra this approach turned out to be too memory consuming.
So this copy was removed. Widgets now have an 'update manually' button to avoid relaunching every computation as soon as a parameter changes.
If we continue this way (no copy of the XASObject) we should ensure:
- the limitation and the reason for it are clearly explained
- the user is clearly shown that no reprocessing is possible (because the XASObject has changed since it was received by the widget)
- a widget is created to make a copy in the flow (a 'saving point')
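A possible shape for that 'saving point' widget is sketched below. The class and method names are hypothetical (not the actual est/Orange API); the design point is simply that the deep copy happens once, in one dedicated node, instead of once per widget as in the initial design:

```python
import copy

class SavingPoint:
    """Hypothetical 'saving point' node: the only place in the flow
    that keeps a deep copy of the XASObject, so downstream widgets
    can be reprocessed from this frozen state."""

    def __init__(self):
        self._frozen = None  # snapshot taken when data passes through

    def set_input(self, xas_obj):
        # The deep copy is the memory cost we pay *once* here,
        # instead of in every widget.
        self._frozen = copy.deepcopy(xas_obj)
        # Pass the live object downstream unchanged.
        return xas_obj

    def restore(self):
        # Hand out a fresh copy of the frozen state for reprocessing,
        # keeping the stored snapshot itself intact.
        return copy.deepcopy(self._frozen)
```

Downstream mutations of the live object then no longer prevent reprocessing: the user can always restart from the saving point.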
Trying to process the entire dataset (B33_xxxx) fails because of too large a memory consumption. Some numbers:
- for spectra of shape (dim1: 400, dim2: 200, energy: 211):
| step | memory usage |
| --- | --- |
| after loading | 0.421 GB |
| after normalization | 1.493 GB |
| after exafs | 3.898 GB |
| after k weight | 4.067 GB |
| after ft | 9.145 GB |
| after saving | 9.141 GB |
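As a sanity check on these numbers, a single full-size float64 array for this dataset already takes about 0.13 GB, so keeping one or more intermediate arrays per processing step accumulates quickly. A rough sketch of the arithmetic, using the shape quoted above:

```python
import numpy as np

# Back-of-the-envelope memory cost for the dataset discussed above:
# a 400 x 200 map with 211 energy points, stored as float64.
dim1, dim2, n_energy = 400, 200, 211
bytes_per_float64 = np.dtype(np.float64).itemsize  # 8 bytes

# One full-size float64 array (~0.13 GB).
one_array_gb = dim1 * dim2 * n_energy * bytes_per_float64 / 1024**3
print(f"one float64 array: {one_array_gb:.3f} GB")

# A complex-valued result (e.g. an FT output stored as complex128)
# doubles that per array.
ft_gb = 2 * one_array_gb
print(f"one complex128 array: {ft_gb:.3f} GB")
```

Reaching ~9 GB therefore means dozens of full-size intermediate arrays are being kept alive at once, which supports dropping per-widget copies and/or releasing intermediates between steps.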
some spectra are not processed (especially by ft)
Make sure this is expected. Should we prevent / avoid continuing the processing on those spectra? For me no, because the treatments might be 'unrelated' and a Spectrum can fail on some processing steps and not on others.
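The 'continue anyway' policy could look like the sketch below: each step is attempted per spectrum, failures are recorded instead of aborting the whole flow, so later (possibly unrelated) treatments still run. `run_pipeline` and its arguments are illustrative, not the actual est API:

```python
def run_pipeline(spectra, steps):
    """Apply each (name, step) callable to each spectrum in turn.

    A failing step is logged per spectrum rather than raised, so the
    remaining steps and spectra are still processed.
    """
    failures = {}  # spectrum index -> list of failed step names
    for i, spectrum in enumerate(spectra):
        for name, step in steps:
            try:
                spectrum = step(spectrum)
            except Exception:
                # Record and move on: a later step may not depend on
                # this one, so we do not drop the spectrum entirely.
                failures.setdefault(i, []).append(name)
    return failures
```

Keeping the failure map would also make it easy to show the user afterwards which spectra failed at which step (e.g. the ft failures mentioned above).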