Rework pre-processing stitching and start adding z-stitching for volumes
goal
add the equivalent of projection z-stitching, but for reconstructed volumes
TODO
- add metadata to reconstructed volumes
  - tomoscan!118 (merged) is part of it
- add 'z-stitching projection' equivalent classes for reconstructed volumes
- provide dedicated notebooks for volume stitching
- add a simple shift grid search which looks through a range of values
- add an option to filter projections / volumes for the shift search
- provide a way to distribute the stitching
- add a 'slices' parameter in the configuration / Python API, and tools to concatenate once the stitching has been distributed
- add an option to distribute the stitching to slurm
- add a slurm-cluster section to do the distribution + stitching for the user (as it is expected to be done a lot this way)?
- add a 'working_directory' field to the configuration when on slurm. Eases things if there are no rights somewhere else...? But issue with HDF5 output? -> must be used only to create the slurm configuration files
- create a third-party project for slurm SBATCH jobs -> https://gitlab.esrf.fr/tomotools/sluurp
- [ ] pre-process the shift search (remotely or locally) to avoid redundancy when distributed
  - [ ] best option would be to have a script available in nabu to handle this part specifically. Then we can do whatever we want
- [ ] make the concatenation depend on the existing stitching jobs when on slurm (using --dependency=afterok:jobid)? See the sketch after the [slurm] example below.
  - [ ] best option would be to have a script available in nabu to handle this part specifically
  - -> if we want the item above done, it will be done in another PR. Not sure this is of (very) high importance.
- add a drawing of the references for pre-processing and post-processing
- add a timeout parameter to the config file for slurm
- profile it: check_stitching_perf.prof -> clearly most of the time is spent on reading...
```ini
[slurm]
partition=
mem=
n_jobs=
other_options=
```
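The `--dependency=afterok:jobid` idea from the TODO list could look like the following. This is only a minimal, hypothetical sketch (not an existing nabu script): each distributed stitching job is submitted with `sbatch --parsable`, then the concatenation job is submitted with `--dependency=afterok:<jobids>` so it only runs once all stitching jobs have succeeded. The batch script names are placeholders.

```python
# Hypothetical sketch: chain a concatenation job to distributed stitching jobs
# using slurm job dependencies. Batch script names are placeholders.
import subprocess

def submit(script, dependency_ids=None):
    """Submit a slurm batch script and return its job id."""
    cmd = ["sbatch", "--parsable"]  # --parsable makes sbatch print only the job id
    if dependency_ids:
        # start only once all listed jobs have finished successfully
        cmd.append("--dependency=afterok:" + ":".join(dependency_ids))
    cmd.append(script)
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

# one stitching job per slice range, then the concatenation depending on all of them
stitch_ids = [submit(script) for script in ("stitch_part_0.sh", "stitch_part_1.sh")]
submit("concatenate.sh", dependency_ids=stitch_ids)
```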
resources
Two notebooks have been created to help with stitching volumes:

- postprocessing_stitching.ipynb: main entry point for executing a full post-processing stitching (i.e. on volumes).
- volume_overlap_stitcher.ipynb: notebook to help find the best overlap parameters.
command line interface & configuration file
Post-processing stitching can be called from the command line:
```bash
# create configuration file
nabu-stitching-config --level advanced --output nabu_stitching.conf --stitching_type z-postproc
# edit the configuration file
...
# execute stitching
nabu-stitching test_command_line/nabu_stitching.conf
```
configuration file
```ini
[stitching]
# section dedicated to stitch parameters
# Which type of stitching to do. Must be in ('z-preproc', 'z-postproc')
type = z-postproc
# Overlap area between two scans in pixel. Can be an int or a list of int. If 'auto' will try to deduce it from z-position (or translations) and pixel-size
vertical_overlap_area_in_px = auto
# Height of the stitch to apply on the overlap region. If set to 'auto' then will take the largest one possible (equal to the overlap height)
stitching_height_in_px = auto
# Policy to apply to compute the overlap area. Must be in ('mean', 'cosinus weights', 'linear weights', 'closest').
stitching_strategy = cosinus weights

[output]
# section dedicated to output parameters
# output stitched volume to create (postprocess)
output_volume = hdf5:volume:stitched.hdf5?data_path=/stitched
# What to do in the case where the output file exists.
# By default, the output data is never overwritten and the process is interrupted if the file already exists.
# Set this option to 1 if you want to overwrite the output files.
overwrite_results = 1

[inputs]
# section dedicated to inputs
# Datasets to stitch together. Must be NXtomo for z-preproc and volumes for z-postproc
input_datasets = hdf5:volume:WGN_01_0000_P_110_8128_D_129_0000pag_db0700_vol.hdf5?data_path=/entry0000/reconstruction,hdf5:volume:WGN_01_0000_P_110_8128_D_129_0001pag_db0700_vol.hdf5?data_path=/entry0001/reconstruction, hdf5:volume:WGN_01_0000_P_110_8128_D_129_0002pag_db0700_vol.hdf5?data_path=/entry0002/reconstruction
```
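To illustrate the `stitching_strategy` values, here is a generic sketch of how the overlap region of two frames could be blended: the upper frame fades out while the lower one fades in, with a linear or cosine ramp, a plain mean, or a 'closest' hard cut. This is only a simplified illustration of the idea behind the option names, not nabu's actual implementation; the function name and signature are made up.

```python
# Generic illustration of overlap blending strategies (not nabu's actual code).
import numpy as np

def blend_overlap(upper, lower, strategy="cosinus weights"):
    """Blend the overlapping rows of two frames of shape (overlap_height, width)."""
    h = upper.shape[0]
    if strategy == "mean":
        w = np.full(h, 0.5)                                   # constant 50/50 average
    elif strategy == "linear weights":
        w = np.linspace(1.0, 0.0, h)                          # linear ramp
    elif strategy == "cosinus weights":
        w = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, h)))  # smooth cosine ramp
    elif strategy == "closest":
        w = (np.arange(h) < h // 2).astype(float)             # keep the closest frame
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    w = w[:, np.newaxis]                                      # broadcast over the width
    return w * upper + (1.0 - w) * lower                      # w is the weight of the upper frame
```

With `stitching_height_in_px = auto` the ramp spans the whole overlap region; a smaller value would presumably restrict the blend to part of it.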
screenshots
@bcordonn (preprocessing)
reconstruction of lofoten: profile over 1000 pixels
raw materials