Two main topologies have been identified among the use cases requiring a multi-backend Lima:

1. Distributed-geometry: a detector composed of aggregated modules can be connected to several computers, each receiving all the frames from a subset of the modules.

2. Distributed-frames: the detector is connected to multiple computers, either directly or through a switching network, and can dispatch full frames to individual computers, alternating the destination computer with a load-balancing arbitration.
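As a rough illustration of the difference between the two topologies, the sketch below shows the two dispatch patterns. All names here are hypothetical and do not belong to the Lima API: distributed-geometry uses a fixed module-to-backend mapping, while distributed-frames alternates whole frames between backends (a simple round-robin stands in for the load-balancing arbitration).

```python
# Hypothetical sketch of the two dispatch patterns; not actual Lima code.
from itertools import cycle

def distributed_geometry(frames, n_backends):
    """Each backend receives all frames, but only from its own sub-modules."""
    # Each frame is a dict: {module_id: module_data}
    routes = {b: [] for b in range(n_backends)}
    for frame in frames:
        for module_id, data in frame.items():
            backend = module_id % n_backends  # fixed module -> backend mapping
            routes[backend].append((module_id, data))
    return routes

def distributed_frames(frames, n_backends):
    """Full frames alternate between backends (round-robin load balancing)."""
    routes = {b: [] for b in range(n_backends)}
    backends = cycle(range(n_backends))
    for frame_nb, frame in enumerate(frames):
        routes[next(backends)].append((frame_nb, frame))
    return routes
```

In the first case a backend only ever sees its own sub-modules; in the second, each backend receives complete frames, but only a fraction of them.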

In a multi-backend computer environment, the following software entities are distinguished:

1. A Lima master/manager/coordinator entity. It provides the top-level LIMA interface, coordinating distributed detector control, data readout and software processing. It can be implemented as a stand-alone program, or as a set of library classes instantiated in the client(s).

2. One or more Lima data readers performing data acquisition. Typically there will be one such entity for each logical connection between the detector and the back-end computers. They generate data streams to be treated by the processing tasks.

3. One or more Lima data processing entities. These are an extension of the current Processlib tasks, receiving data streams generated by the receivers or by other processing tasks. Depending on its nature, a processing task can be:

3.1. Pixel-based task: can work on frames from a single sub-module in a distributed-geometry topology. The results of the tasks running on the sub-modules can be aggregated to obtain the result for the full detector. This is the case of:

* Software binning(*), RoI, flip, rotation
* Pixel masking, background subtraction, flat-field correction
* Frame accumulation
* RoI counters, RoI projection (RoI-2-spectrum)
* Sinogram, PyFAI

3.2. Frame-based task: needs full frames in order to generate the result, like:

* Beam-position monitor

(*) The sub-module size must be a multiple of the software binning factor.
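The key property of a pixel-based task is that per-sub-module results can be combined into the full-detector result. A minimal NumPy sketch, using a RoI counter as the example (the helper names are hypothetical, not Lima code):

```python
import numpy as np

def roi_counter(sub_frame, roi):
    """Pixel-based task: sum of the counts inside a RoI of one sub-module frame."""
    y0, y1, x0, x1 = roi
    return int(sub_frame[y0:y1, x0:x1].sum())

# Each backend computes the counter on its own sub-module frame...
sub_frames = [np.ones((4, 4), dtype=np.int64), 2 * np.ones((4, 4), dtype=np.int64)]
partial = [roi_counter(f, (0, 4, 0, 4)) for f in sub_frames]

# ...and the coordinator aggregates the partial results (here a plain sum).
full_detector_counts = sum(partial)  # 16 + 32 = 48
```

The same pattern (compute locally, aggregate cheaply) applies to the other pixel-based tasks listed above, with the aggregation operator depending on the task.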

Pixel-based tasks can either run on the same back-end computer as the corresponding data receiver, or they can run on another distributed computer. Frame-based tasks need the sub-module frames of a distributed-geometry topology to be aggregated before being executed, so they are not suited to run on the data-receiver back-end computers.
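In contrast, a frame-based task such as a beam-position monitor is only meaningful on the assembled full frame. A sketch of the required aggregation step, assuming a simple 1x2 horizontal sub-module layout (purely illustrative; real detector geometries are more involved):

```python
import numpy as np

def assemble_full_frame(sub_frames):
    """Aggregate sub-module frames side by side (assumed 1x2 geometry)."""
    return np.hstack(sub_frames)

def beam_position(frame):
    """Frame-based task: intensity-weighted centroid of the full frame."""
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (float((ys * frame).sum() / total), float((xs * frame).sum() / total))

left = np.zeros((4, 4))
right = np.zeros((4, 4))
right[2, 1] = 1.0  # beam spot lands in the right-hand sub-module
full = assemble_full_frame([left, right])

# The centroid is only correct once the two halves are assembled:
y, x = beam_position(full)  # -> (2.0, 5.0) in full-frame coordinates
```

Running `beam_position` on the right-hand sub-module alone would report x = 1.0 instead of 5.0, which is why such tasks must run after aggregation rather than on a single receiver backend.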

Data streams: the most basic data stream is the "local memory buffers" currently implemented in the Lima processing chain. Another existing kind of data stream is data files in shared storage.

The software processing plugin API will need to be extended to:

* Check whether a task can treat data coming directly from a receiver, or whether it needs aggregation of multiple data streams
* Run on a distributed computer
* Provide an API for distributed readout of (potentially) intermediate streams
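One way the first two extensions could look is a capability descriptor that each plugin exports, which the coordinator queries when planning the processing chain. All names below are purely illustrative; the real Processlib API will differ:

```python
from dataclasses import dataclass
from enum import Enum, auto

class InputKind(Enum):
    SUB_MODULE_STREAM = auto()  # can consume a receiver stream directly
    FULL_FRAME = auto()         # needs aggregated full frames first

@dataclass
class TaskCapabilities:
    """Hypothetical capability descriptor a processing plugin would export."""
    input_kind: InputKind
    can_run_distributed: bool

def plan_task(caps):
    """Decide whether aggregation is required and where the task may run."""
    needs_aggregation = caps.input_kind is InputKind.FULL_FRAME
    placement = "any computer" if caps.can_run_distributed else "receiver backend"
    return needs_aggregation, placement

# Example descriptors for the two task families discussed above:
roi_counters = TaskCapabilities(InputKind.SUB_MODULE_STREAM, True)
beam_monitor = TaskCapabilities(InputKind.FULL_FRAME, True)
```

With such a descriptor, the coordinator can insert an aggregation step only for frame-based tasks and schedule pixel-based tasks directly on the receiver backends.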

Examples:

Pros:

Cons: