LimaGroup issues
https://gitlab.esrf.fr/groups/limagroup/-/issues

https://gitlab.esrf.fr/limagroup/lima2/-/issues/125
Implement Trigger Modes (2022-07-18T17:10:35+02:00, Samuel Debionne)

- [ ] Software
- [ ] External
- [ ] Gate (exposure given by the pulse)
With an additional parameter `nb_frames_per_trigger`.
- `Software && nb_frames_per_trigger == nb_frames` is equivalent to "Internal trigger"
- `Software && nb_frames_per_trigger == 1` is equivalent to "Internal trigger multi"
- `External && nb_frames_per_trigger == nb_frames` is equivalent to "External trigger"
- `External && nb_frames_per_trigger == 1` is equivalent to "External trigger multi"
More combinations are possible with this configuration.

https://gitlab.esrf.fr/limagroup/lima2/-/issues/164
Implement HDF5 metadata from Detector Info (2023-01-12T14:53:31+01:00, Samuel Debionne)

HDF5 metadata are currently filled with dummy values. (Milestone: ID29 restart)

https://gitlab.esrf.fr/limagroup/lima2/-/issues/159
Make the Receiver Allocator Aware (2023-01-12T14:53:58+01:00, Samuel Debionne)

Starting with the `simulator`:
```cpp
class receiver
{
receiver(std::pmr::polymorphic_allocator<std::byte> allocator = {});
...
```
The allocator should then be used to allocate the `data_t` provided to the `on_frame_ready()` callback. (Milestone: ID29 restart)

https://gitlab.esrf.fr/limagroup/lima2/-/issues/156
Compression on writer_sparse datasets not supported (2022-12-05T15:20:54+01:00, Samuel Debionne)

The compression setting is exposed to the user but not properly supported, i.e. only compression=`none` works.
In the `writer_sparse`, the `dcpl` is defined and used for the `radius-1d` and `radius_mask` datasets, but those are written without chunking, resulting in:
```
H5Dint.c line 1319 in H5D__create(): filters can only be used with chunked layout
```

https://gitlab.esrf.fr/limagroup/lima2/-/issues/149
Add support for init_params per device (2023-07-21T14:33:49+02:00, Samuel Debionne)

`plugins_params` is currently a device property of the control device that is broadcast to every device that belongs to the app.
Multiple options:
- One config file with all the init params as `plugins_params` (à la PSI)
- A vector of plugin params with a configuration for each device (à la Smartpix)
- Each device has its own `plugin_params`, i.e. no longer broadcast (à la Sam :-))
@ponsard has a PR ready to implement option 2. @alejandro.homs, what do you think?

https://gitlab.esrf.fr/limagroup/lima2/-/issues/147
Support continuous / infinite acquisition (2023-07-21T14:34:13+02:00, Samuel Debionne)

Bliss `timescan`s fail because `io_hdf5_node` does not support a number of frames unknown at creation time.

https://gitlab.esrf.fr/limagroup/lima/-/issues/122
Follow-up from "Fix HDF5 parallel saving": formalize SaveContainer::Handler (2020-09-03T16:51:41+02:00, Alejandro Homs Puron)

The following discussion from !167 should be addressed:
- [ ] @alejandro.homs started a [discussion](https://gitlab.esrf.fr/limagroup/lima/-/merge_requests/167#note_77915):
> The problem is fixed for `HDF5`, but other `SaveContainer`s are not state-less, like `EDF`.
>
> By formalizing `SaveContainer::Handler` we avoid this kind of issue. (Milestone: v1.10.0)

https://gitlab.esrf.fr/limagroup/lima/-/issues/116
Data saving with HDF5 shows low performance (2020-06-10T12:01:44+02:00, Alejandro Homs Puron)

When performing acquisitions with the Dectris/Eiger2 under X-ray generator photons at ID22, the effective data saving performance is very low:
> The current illumination scheme generates compressed images of ~450 kByte. Under this regime, at a 2 kHz frame rate, we found a limiting saving speed on BeeGFS with HDF5-BSLZ4 of ~450 MByte/s (when 900 MByte/s is needed). The interesting news is that we found more or less the same limit when saving on one NVMe drive, contrary to what we expected.
https://gitlab.esrf.fr/limagroup/lima/-/issues/108
Usage of [[maybe_unused]] and other C++17 attributes (2020-05-20T14:48:53+02:00, Samuel Debionne)

The following discussion from !158 should be addressed:
- [ ] @debionne started a [discussion](https://gitlab.esrf.fr/limagroup/lima/-/merge_requests/158#note_70720): (+8 comments)
> `[[maybe_unused]]` is C++17 but should be harmless (even for MSVC), yet might generate a warning... which is what you are trying to avoid.

https://gitlab.esrf.fr/limagroup/lima/-/issues/94
Video Data-2-Image task is not included in Lima processing state (2019-07-09T16:22:00+02:00, Alejandro Homs Puron)

`CtVideo` must be `active` in order to update the data returned by `CtVideo.getLastImage()`. When `active`, an independent task is systematically included in the frame processing pipeline by `CtVideo::frameReady/_data_2_image`. However, the task status is not taken into account by the acquisition state machine in `CtControl`. This is a potential point of failure because `HW buffers` can be re-allocated before the `Mapped Data` buffers are consumed.
A possible solution is to modify `CtControl::_calcAcqStatus` to take into account `CtVideo::m_ready_flag` (if `CtVideo::m_active_flag`), and to call it from `CtVideo::_data2image_finished`.

https://gitlab.esrf.fr/limagroup/lima2/-/issues/26
Have configurable chunking of HDF5 file (2021-06-18T15:06:56+02:00, Samuel Debionne)

Depending on the use case and the downstream data analysis, chunking must be configured in different ways to get good reading performance.