Propose --max_chunk_size option
When a huge amount of memory (both CPU and GPU) is available, the processed chunks are big, and this results in a very large file (~100 GB). Such a file size might be problematic.
It would be good to limit how many slices are reconstructed in a single chunk. For now, the workaround is to tune --gpu_mem_fraction and --cpu_mem_fraction, as sketched below.
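A minimal sketch of how the proposed option could interact with the memory-based chunk estimate. All names here (estimate_chunk_size, bytes_per_slice, etc.) are hypothetical and for illustration only; nabu's actual chunk computation is more involved:

```python
def estimate_chunk_size(mem_bytes, mem_fraction, bytes_per_slice, max_chunk_size=None):
    """Hypothetical estimate of how many slices fit in one processing chunk."""
    # Current behavior: chunk size follows directly from available memory,
    # scaled by the user-tunable fraction (--gpu_mem_fraction / --cpu_mem_fraction).
    usable_mem = mem_bytes * mem_fraction
    chunk_size = int(usable_mem // bytes_per_slice)
    # Proposed --max_chunk_size: an explicit upper bound on slices per chunk,
    # so machines with lots of memory do not produce huge intermediate files.
    if max_chunk_size is not None:
        chunk_size = min(chunk_size, max_chunk_size)
    return max(chunk_size, 1)
```

With such a cap, the chunk size would be bounded regardless of available memory, instead of indirectly shrinking it by lowering the memory fractions.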