Add max_workers to SlurmRestExecutor
SlurmRestExecutor is a pool with an API modeled on https://docs.python.org/3/library/concurrent.futures.html#executor-objects:
it has map, submit, and shutdown, and can be used as a context manager.
Concrete executors like https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor add extra features which we might want to support as well, most notably the max_workers
argument, which limits the number of slurm jobs running at the same time for a particular pool.
@wright Would adding a max_workers
argument (unlimited by default) be what you need?
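To illustrate the semantics a max_workers argument would give SlurmRestExecutor, here is a minimal sketch using the stdlib ThreadPoolExecutor as a stand-in (threads play the role of slurm jobs; the counter just verifies that concurrency never exceeds the limit):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

peak = 0    # highest number of tasks observed running at once
active = 0  # tasks currently running
lock = threading.Lock()

def task(i):
    """Track concurrent executions while doing a little work."""
    global peak, active
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)  # stand-in for real work
    with lock:
        active -= 1
    return i * 2

# max_workers=3 caps concurrency at 3, the behavior proposed here
# for SlurmRestExecutor (where each worker would be a slurm job).
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(task, range(10)))
```

All ten tasks complete, but at most three ever run simultaneously, which is exactly the cap a SlurmRestExecutor(max_workers=3) would put on concurrent slurm jobs.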
Another thing that is missing is a pre-launch option like ProcessPoolExecutor
has by default iirc (https://github.com/python/cpython/blob/8e2aab7ad5e1c8b3360c1e1b80ddadc0845eaa3e/Lib/concurrent/futures/process.py#L771C9-L771C26). This would mean that with max_workers=3,
3 slurm jobs are pre-launched and waiting for work. When a job exits it needs to be restarted. @wright Would you need that?
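A rough sketch of the pre-launch idea, with threads standing in for slurm jobs (the names and structure are illustrative, not the real SlurmRestExecutor implementation): all workers are started up front and block waiting for tasks, before anything is submitted.

```python
import queue
import threading

MAX_WORKERS = 3  # would come from max_workers=3
tasks = queue.Queue()
results = queue.Queue()

def worker():
    """Loop pulling tasks until a shutdown sentinel arrives.
    In the slurm case this loop would live inside a pre-launched job;
    if the job exited early, the pool would have to restart it."""
    while True:
        item = tasks.get()
        if item is None:  # shutdown sentinel
            return
        results.put(item * item)

# Pre-launch: workers exist and wait for work before any submit().
threads = [threading.Thread(target=worker) for _ in range(MAX_WORKERS)]
for t in threads:
    t.start()

# Now submit work; the pre-launched workers pick it up immediately.
for i in range(5):
    tasks.put(i)

# Shut down: one sentinel per worker, then join.
for _ in range(MAX_WORKERS):
    tasks.put(None)
for t in threads:
    t.join()
```

The upside is latency: work starts as soon as it is submitted, instead of waiting for a slurm job to be scheduled. The restart-on-exit part is the tricky bit for slurm, since jobs can be killed by time limits or preemption.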