Projections subsampling: reconstruct from even or odd projections
About
ID16 needs to reconstruct from even/odd projections (see #397).
With nabu it is possible, using `[dataset] projections_subsampling`, to reconstruct from a subset of the projections. In this case the simplest is probably to define the value in the form `subsampling_step:beginning` (e.g. `2:1` to reconstruct from odd projections).
Close #397
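For illustration, the proposed `subsampling_step:beginning` value amounts to a strided selection of the projection indices. The snippet below is a minimal sketch in plain Python, not nabu code; the variable names are illustrative, and the `2:0` form for even projections is an assumption.

```python
# Illustration only (not nabu code): a "step:beginning" value such as "2:1"
# is a strided selection of the projection indices.
indices = list(range(8))      # stand-in for projection indices
step, begin = 2, 1            # "2:1" -> every 2nd projection, starting at index 1
print(indices[begin::step])   # [1, 3, 5, 7] -> "odd" projections
print(indices[0::2])          # [0, 2, 4, 6] -> "even" projections ("2:0", assumed syntax)
```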
To do
- Update `nabu_config` and validators (a sketch of the value parsing is given after this list)
- Update `ProcessConfig` and validation
- Update chunk reader
- Unit test
- Update changelog/documentation
- End-to-end reconstruction test
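To make the first item more concrete, here is a minimal sketch of what a validator for the `step:beginning` value could look like. This is only an assumption, not the actual nabu_config validator: the function name is hypothetical, and accepting a bare `step` (meaning `beginning = 0`) is an assumption as well.

```python
# Hypothetical validator sketch for the "step:beginning" syntax
# (not the actual nabu_config validator).
def parse_projections_subsampling(value):
    parts = str(value).split(":")
    if len(parts) == 1:
        step, begin = int(parts[0]), 0              # bare "step": assume beginning=0
    elif len(parts) == 2:
        step, begin = int(parts[0]), int(parts[1])
    else:
        raise ValueError("Expected 'step' or 'step:beginning', got %r" % value)
    if step < 1 or not (0 <= begin < step):
        raise ValueError("Need step >= 1 and 0 <= beginning < step")
    return step, begin

assert parse_projections_subsampling("2:1") == (2, 1)  # odd projections
assert parse_projections_subsampling("2:0") == (2, 0)  # even projections
```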
Notes
The implementation is not so obvious. For the record:
- When parsing a dataset, `tomoscan` builds a dictionary of projections in the form `{idx: data_url}` (see the example below).
- However, for performance, it is better to read a big data sub-volume rather than individual images (e.g. `h5_dataset[begin:end, :, :]` rather than many calls to `h5_dataset[i, :, :]`).
- So nabu uses a `get_compacted_dataslices()` function which builds a minimal collection of `DataUrl(..., data_slice=(begin, end, step))`, so that only a few calls to `h5_read` are done. The implementation of `get_compacted_dataslices` is not so trivial when subsampling is considered (a simplified sketch is given right after this list).
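To show why the subsampling makes the compaction non-trivial, here is a simplified, self-contained sketch. It is not nabu's `get_compacted_dataslices` and the function name is made up; the assumption made here is that contiguous runs of indices have to be detected, and that the subsampling phase has to be kept consistent across the "jumps" between runs.

```python
# Simplified sketch (not nabu's get_compacted_dataslices) of compacting
# projection indices into slices while honoring a (step, begin) subsampling.
def compact_with_subsampling(indices, step=1, begin=0):
    indices = sorted(indices)
    # 1) Split the indices into runs of contiguous values
    runs = []
    run_start = prev = indices[0]
    for i in indices[1:]:
        if i != prev + 1:              # a "jump" closes the current run
            runs.append((run_start, prev))
            run_start = i
        prev = i
    runs.append((run_start, prev))
    # 2) Turn each run into a strided slice, keeping the subsampling
    #    phase consistent across runs (assumption)
    slices = []
    n_seen = 0                         # projections encountered so far
    for start, stop in runs:
        offset = (begin - n_seen) % step
        slices.append(slice(start + offset, stop + 1, step))
        n_seen += stop - start + 1
    return slices

# Reproduces the figures of the example below (two runs of 500 projections)
idx = list(range(26, 526)) + list(range(551, 1051))
print(compact_with_subsampling(idx, step=2, begin=1))  # [slice(27, 526, 2), slice(552, 1051, 2)]
print(compact_with_subsampling(idx, step=2, begin=0))  # [slice(26, 526, 2), slice(551, 1051, 2)]
```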
Example:
```python
from nabu.io.reader import ChunkReader
from nabu.resources.dataset_analyzer import analyze_dataset
from nabu.io.utils import get_compacted_dataslices

di = analyze_dataset("/tmp/nabu_testdata_paleo/bamboo_reduced.nx")
reader_full = ChunkReader(di.projections)
reader_odd = ChunkReader(di.projections, dataset_subsampling=(2, 1))
reader_even = ChunkReader(di.projections, dataset_subsampling=(2, 0))

# projection indices: [26, 27, ..., 525] [551, ..., 1050]
# (the "jump" in the middle is due to the presence of a series of flats)
# subsampling-odd:  [27, 29, 31, ..., 523, 525] [552, 554, 556, ..., 1048, 1050]
# subsampling-even: [26, 28, ..., 522, 524] [551, 553, ..., 1047, 1049]
```
Then `get_compacted_dataslices(reader_full.files)` returns:
```python
{26: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(26, 526, 1)),
 27: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(26, 526, 1)),
 28: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(26, 526, 1)),
 29: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(26, 526, 1)),
 # ...
 524: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(26, 526, 1)),
 525: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(26, 526, 1)),
 551: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(551, 1051, 1)),
 552: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(551, 1051, 1)),
 # ...
 1049: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(551, 1051, 1)),
 1050: DataUrl(valid=True, scheme=None, file_path='/tmp/nabu_testdata_paleo/bamboo_reduced.nx', data_path='/entry0000/instrument/detector/data', data_slice=slice(551, 1051, 1))}
```
(note the "jump" in indices in the middle).
Then:

```python
slice_to_tuple = lambda s: (s.start, s.stop, s.step)

set([slice_to_tuple(u.data_slice()) for u in get_compacted_dataslices(reader_even.files, subsampling=reader_even.dataset_subsampling, begin=reader_even._files_begin_idx).values()])
# returns {(26, 526, 2), (551, 1051, 2)}

set([slice_to_tuple(u.data_slice()) for u in get_compacted_dataslices(reader_odd.files, subsampling=reader_odd.dataset_subsampling, begin=reader_odd._files_begin_idx).values()])
# returns {(27, 526, 2), (552, 1051, 2)}
```
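For completeness, here is a minimal sketch with plain h5py (not nabu's ChunkReader; the file and data paths are those of the example above) showing that the compacted slices keep the number of HDF5 reads small even with subsampling:

```python
# Minimal sketch with plain h5py (not nabu's ChunkReader): reading the "odd"
# subsampled projections with only two HDF5 reads instead of one per image.
import h5py

fname = "/tmp/nabu_testdata_paleo/bamboo_reduced.nx"
data_path = "/entry0000/instrument/detector/data"
compacted = [slice(27, 526, 2), slice(552, 1051, 2)]  # from the output above

with h5py.File(fname, "r") as f:
    chunks = [f[data_path][s, :, :] for s in compacted]  # 2 reads, 250 images each
```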