Commit 9b85ec12 authored by Pierre Paleo's avatar Pierre Paleo

Update doc

parent f2fa9d89
Pipeline #20328 canceled with stage
......@@ -16,6 +16,8 @@ Submodules
nabu.resources.computations
nabu.resources.dataset_analyzer
nabu.resources.dataset_validator
nabu.resources.gpu
nabu.resources.machinesdb
nabu.resources.nabu_config
nabu.resources.processconfig
nabu.resources.tasks
......
......@@ -6,6 +6,7 @@ Subpackages
.. toctree::
nabu.app
nabu.cuda
nabu.distributed
nabu.io
......
......@@ -31,6 +31,10 @@ By design of Nabu, and thanks to the synchrotron parallel beam geometry, each wo
However, if at some point the workers need to exchange a notable amount of data, this approach becomes less practical (though still achievable); message-passing solutions like MPI would then be more appropriate.
### Chunk processing
The current module [nabu.app](apidoc/nabu.app) is designed to process the data by [chunks](definitions.md). If the detector is very wide (horizontally) and if there are many projections, this approach could require more memory than available.
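The chunk-based approach above can be sketched as follows. This is a minimal illustration, not the actual `nabu.app` implementation: the function name `process_by_chunks` and the callback `process_chunk` are hypothetical, and the sketch assumes projections are stored as a 3D array `(n_angles, n_z, n_x)` where groups of detector rows can be processed independently thanks to the parallel-beam geometry.

```python
import numpy as np

def process_by_chunks(projections, chunk_size, process_chunk):
    """Process a stack of projections chunk by chunk along the vertical axis.

    Each chunk is a group of detector rows (a stack of sinograms); in
    parallel-beam geometry, each chunk can be reconstructed independently,
    which bounds the memory footprint to one chunk at a time.
    """
    n_z = projections.shape[1]
    results = []
    for start in range(0, n_z, chunk_size):
        chunk = projections[:, start:start + chunk_size, :]
        results.append(process_chunk(chunk))
    return np.concatenate(results, axis=0)
```

The smaller `chunk_size` is, the less memory is needed, at the cost of more iterations; a wide detector with many projections makes a full-volume pass impractical, which is the situation described above.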
## Why use an additional tasks representation?
......
......@@ -65,20 +65,23 @@ class Component:
def get_backend(self, backends, default_fallback="numpy"):
"""
backends is a dictionary following this example:
backends = {
"cuda": {
"option_key": "use_cuda",
"requirement": __has_pycuda__ and __has_cufft__,
"requirement_errormsg": "pycuda and scikit-cuda must be installed",
"fallback": "numpy",
"available": True,
"priority": 1,
},
"numpy": {
...
"priority": 0,
.. code-block:: python
backends = {
"cuda": {
"option_key": "use_cuda",
"requirement": __has_pycuda__ and __has_cufft__,
"requirement_errormsg": "pycuda and scikit-cuda must be installed",
"fallback": "numpy",
"available": True,
"priority": 1,
},
"numpy": {
...
"priority": 0,
}
}
}
"""
backend = default_fallback
usable_backends = {}
......
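The backend-selection logic documented in the docstring above could be sketched as follows. This is a hypothetical sketch, not Nabu's actual `get_backend` implementation: it assumes each entry of the `backends` dictionary carries the `requirement`, `available`, and `priority` keys shown in the docstring, and that the usable backend with the highest priority wins, falling back to `default_fallback` otherwise.

```python
def select_backend(backends, default_fallback="numpy"):
    # Keep only the backends whose requirements are met and that are
    # flagged as available.
    usable = {
        name: spec for name, spec in backends.items()
        if spec.get("requirement", True) and spec.get("available", True)
    }
    if not usable:
        # No backend is usable: fall back to the default (e.g. numpy).
        return default_fallback
    # Pick the usable backend with the highest priority.
    return max(usable, key=lambda name: usable[name].get("priority", 0))
```

For example, if `pycuda` is not installed the `"requirement"` entry for `"cuda"` evaluates to `False`, so the selection falls back to `"numpy"`.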