File header: add DetInfo; two possible implementations
Enhance DetInfo, e.g. SerialNumber, ...
Callbacks (multiple) for file saving, triggered when a file is closed; then plug into LimaCCDs through a property/attribute
New callbacks: Acquisition status
When: for the Christmas shutdown
CI and packaging
CMake merge for Jan 18; we must validate it internally first
Conda: evaluation before the end of December
For which version: 1.8
Roadmap for Lima 2
BinRoiFlipRot: for COD
Visualisation Silx + test-control with limagui
Processlib for multi-branch streams, for saving at different stages and for online processing
new API for saving
de-couple frame acquisition and processing
Global management for both acquisition and software processing tasks; automatic and expert modes should be available.
Packaging and deployment
CI for LT stability and integration
Conda for packaging
Ansible for deployment
Supervisor for controlling Tango servers
Lima 3: Distributed
Multi-backend systems versus distributed lima (client-server)
Candidate detector: Eiger 2M?
Start the discussion in parallel with the Lima 2 implementation.
Plan a discussion meeting with the Det group on the Lima 3 plan
Development of Lima 2
De-couple acquisition and processing??
Processlib for multi-branches??
work on implementing Lima 3 concepts
Test platform for multi-backend systems (Distributed Lima ??)
Our tasks and next steps
Write the COD position paper
Define the right architecture for Lima 3
Meeting with the Det group to communicate our decision concerning the Lima 2 and 3 roadmap, and then the list of COD deliverables.
Architecture for Lima 3
The need for a distributed Lima comes from the high frame rates generated by multi-module cameras like the Eiger.
After the last discussion, Alejandro and Sebastien proposed two different approaches.
Seb: Distributed Lima with clear separation of acquisition and processing stages
The front-end stage (acquisition) should manage the balancing of images, or parts of images, across several computers towards a storage area
The back-end stage (processing) should manage the ready images for further processing
Alejandro: Distributed Lima with mixed acquisition and processing stages
The proposed approach is based on the following principles:
Lima manages the 2D data acquisition as well as software processing
Offering an optimum use of software resources
Some modern detectors cannot be managed by a single back-end computer
Multiple back-end computers are needed for data acquisition and/or processing
In addition, Lima v2 is expected to provide:
Explicit separation between data acquisition and data processing dynamics
Data processing management must provide a flexible way to control (parallel) tasks at different speeds
Extended API for multiple data saving streams
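As a minimal sketch of what an extended multi-stream saving API could look like, the snippet below models several independent saving streams, one per processing stage, that can be enabled or disabled individually. All names (`SavingStream`, `SavingManager`, `frames_per_file`, ...) are hypothetical illustrations, not the actual Lima interface.

```python
from dataclasses import dataclass

# Hypothetical sketch of a multi-stream saving API for Lima v2; class and
# parameter names are illustrative assumptions, not real Lima identifiers.

@dataclass
class SavingStream:
    name: str                   # e.g. "raw", "corrected", "accumulated"
    directory: str              # target directory for this stream
    file_format: str            # e.g. "hdf5", "edf"
    frames_per_file: int = 100
    enabled: bool = True

class SavingManager:
    """Holds several independent saving streams, one per processing stage."""
    def __init__(self):
        self._streams = {}

    def add_stream(self, stream: SavingStream):
        self._streams[stream.name] = stream

    def active_streams(self):
        return [s for s in self._streams.values() if s.enabled]

mgr = SavingManager()
mgr.add_stream(SavingStream("raw", "/data/raw", "hdf5"))
mgr.add_stream(SavingStream("corrected", "/data/corr", "hdf5", enabled=False))
active = [s.name for s in mgr.active_streams()]  # only the enabled stream
```

The point of the sketch is that each stream carries its own parameters, so a raw stream and a corrected stream can target different directories, formats, and file sizes.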
Two main topologies have been identified in the cases needing multi-backend Lima:
Distributed-geometry: A detector composed of aggregated modules can be connected to several computers, each receiving all the frames from a sub-set of modules
Distributed-frames: The detector is connected to multiple computers, either directly or through a switching network, and is able to dispatch full frames to individual computers, alternating the destination computer with a load-balancing arbitration
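The distributed-frames arbitration can be sketched with a toy round-robin dispatcher. This assumes a plain round-robin policy for simplicity; a real arbitration could weigh actual backend load. Names are hypothetical.

```python
import itertools

# Toy illustration (not Lima code) of the distributed-frames topology:
# full frames are dispatched to back-end computers by alternating the
# destination with a simple round-robin load-balancing arbitration.

def dispatch_frames(frame_ids, backends):
    """Assign each full frame to one back-end computer, alternating targets."""
    assignment = {b: [] for b in backends}
    cycle = itertools.cycle(backends)
    for fid in frame_ids:
        assignment[next(cycle)].append(fid)
    return assignment

result = dispatch_frames(range(6), ["backend-0", "backend-1", "backend-2"])
# With three backends, each one receives every third frame.
```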
In a multi-backend computer environment, the following software entities are distinguished:
A Lima master/manager/coordinator entity. It provides the top-level LIMA interface, coordinating distributed detector control, data readout and software processing. It can be implemented as a stand-alone software program, or as a set of library classes instantiated in the client(s).
One or more Lima data readers performing the data acquisition. Typically there will be one such entity for each logical connection between the detector and the back-end computers. They will generate data streams to be treated by the processing tasks.
One or more Lima data processing entities. These are an extension of the current Processlib tasks, receiving data streams generated by the receivers or by other processing tasks. Depending on its nature, a processing task can be:
3.1. Pixel-based task: can work with frames from a sub-module in a distributed-geometry topology. The results of the tasks running on sub-modules can be aggregated to obtain the result on the full detector. This is the case of:
3.2. Frame-based task: needs full frames in order to generate the result, like:
Sub-module size must be a multiple of the software binning factor.
Pixel-based tasks can either run on the same back-end computer as the corresponding data receiver, or they can run on another distributed computer. Frame-based tasks need an aggregation of the sub-module frames in a distributed-geometry topology before being executed, so they are not suited to running on the data receiver's back-end computer.
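The distinction between the two task kinds can be illustrated with a toy sketch, assuming NumPy arrays as frames split across two sub-modules. The functions are hypothetical stand-ins, not Processlib tasks.

```python
import numpy as np

# Pixel-based task (here: per-pixel scaling) can run per sub-module and its
# partial results can be stitched afterwards; a frame-based task (here: the
# position of the frame maximum) needs the assembled full frame first.

def pixel_task(sub_frame):
    return sub_frame * 2            # purely per-pixel, sub-module-local

def aggregate(sub_results):
    return np.vstack(sub_results)   # stitch sub-module results back together

def frame_task(full_frame):
    return int(full_frame.argmax()) # needs the whole frame to be correct

top = np.arange(6).reshape(2, 3)     # sub-module 0
bottom = np.arange(6, 12).reshape(2, 3)  # sub-module 1

# Pixel-based: run on each sub-module, then aggregate the results.
full_from_parts = aggregate([pixel_task(top), pixel_task(bottom)])
# Frame-based: aggregate the raw sub-frames first, then run the task.
argmax = frame_task(aggregate([top, bottom]))
```

This is why, in the note above, frame-based tasks are better placed after an aggregation step rather than on the receivers themselves.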
Data streams: The most basic data stream is the "local memory buffers" mechanism currently implemented in the Lima processing chain. Another existing kind of data stream is data files in shared storage.
The software processing plugin API will need to be extended to:
Check whether it can treat data coming directly from a receiver, or whether it needs aggregation of multiple data streams
Run on a distributed computer
API for distributed readout of (potentially) intermediate streams
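A minimal sketch of what such a plugin capability declaration might look like: each plugin states whether it can consume a receiver's sub-module stream directly or requires aggregated full frames, and whether it may be scheduled on a remote computer. All class, attribute, and function names here are assumptions for illustration, not the actual plugin API.

```python
# Hypothetical sketch of the extended processing-plugin API.

class ProcessingPlugin:
    needs_full_frames = False   # True => requires aggregation of sub-streams
    distributable = True        # may be scheduled on another computer

    def process(self, data):
        raise NotImplementedError

class SubModuleRoiCounter(ProcessingPlugin):
    needs_full_frames = False   # pixel-based: works on sub-module streams

class FullFrameRotation(ProcessingPlugin):
    needs_full_frames = True    # frame-based: must see the assembled frame

def can_attach_to_receiver(plugin):
    """A plugin can consume a receiver stream directly only if it does not
    require aggregated full frames."""
    return not plugin.needs_full_frames
```

With such flags, the coordinator entity could decide at configuration time whether to place a task next to a data receiver or behind an aggregation stage.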