|
|
|
|
|
Lima2 will provide a single call to set **all** the parameters configuring the detector, the data acquisition and the processing. The camera configuration parameters will eventually include the HW image transformations (binning, flip, RoI), with a well-defined order provided by the hardware (which could be implemented as a read-only entry in the parameter tree). The same explicit configuration of the processing pipeline will be provided in the global configuration tree.
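As a rough illustration, such a single-call configuration could take the shape of one nested parameter tree. All names below (`set_params`, `hw_transform_order`, the pipeline task names) are assumptions for the sketch, not the actual Lima2 API:

```python
# Hypothetical single-call configuration tree; names are illustrative
# assumptions, not the real Lima2 parameter schema.
params = {
    "acquisition": {"nb_frames": 100, "expo_time": 0.01},
    "camera": {
        "binning": (2, 2),
        "flip": (False, True),
        "roi": (0, 0, 1024, 1024),
        # Read-only entry published by the camera plugin: the order in
        # which the hardware applies its image transformations.
        "hw_transform_order": ("flip", "binning", "roi"),
    },
    "processing": {
        # Explicit configuration of the processing pipeline.
        "pipeline": ["dark_subtraction", "roi_counters", "saving"],
    },
}

def set_params(tree):
    """Single call applying detector, acquisition and processing config."""
    # A real implementation would validate the tree against a
    # plugin-provided schema before pushing it to the device.
    assert "hw_transform_order" in tree["camera"]
    return tree

applied = set_params(params)
```

The read-only `hw_transform_order` entry is one possible way for the hardware to expose the transformation order mentioned above.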
|
|
|
|
|
|
|
|
|
Helper classes on the client side can implement the Lima1 `HardAndSoft` concept by first querying the camera for the possible HW image transformations and then inserting, if necessary, SW tasks into the processing pipeline in order to fulfill the user request.
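A minimal sketch of such a helper, under the assumption that the camera publishes its HW capabilities as sets of supported values (the names `HW_CAPS` and `split_hard_and_soft` are hypothetical):

```python
# Assumed capability description published by the camera plugin:
# for each transformation, the set of values the hardware supports.
HW_CAPS = {"binning": {(1, 1), (2, 2)}, "flip": {(False, False)}}

def split_hard_and_soft(requested, hw_caps=HW_CAPS):
    """Split a user request into HW settings and fallback SW tasks."""
    hw, sw = {}, []
    for name, value in requested.items():
        if value in hw_caps.get(name, set()):
            hw[name] = value           # the hardware can do it directly
        else:
            sw.append((name, value))   # fall back to a SW processing task
    return hw, sw

hw, sw = split_hard_and_soft({"binning": (2, 2), "flip": (True, False)})
# binning (2, 2) is handled in HW; the flip must become a SW task
```

The SW part would then be inserted at the head of the processing pipeline, as described above.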
|
|
|
|
|
|
|
|
|
|
|
|
|
# HW layout vs reconstruction tasks
|
|
|
|
|
|
Detectors generating independent sub-images from different modules require an assembly stage to form the real image. In addition to the translation, the sub-images might need cropping (RoI), flip and/or rotation. These pure-SW operations are preferably performed by the processing pipeline so they can benefit from optimal parallelization.
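The assembly stage can be sketched as follows, using plain nested lists for pixels and an assumed per-module layout of (pixels, flip flag, translation); the format is illustrative, not the Lima2 layout structure:

```python
def assemble(modules, width, height):
    """Assemble module sub-images into the full frame.

    Each module entry is (pixels, flip_x, (x0, y0)): an optional
    horizontal flip followed by a translation to position (x0, y0).
    """
    frame = [[0] * width for _ in range(height)]
    for pixels, flip_x, (x0, y0) in modules:
        rows = [row[::-1] for row in pixels] if flip_x else pixels
        for dy, row in enumerate(rows):
            for dx, v in enumerate(row):
                frame[y0 + dy][x0 + dx] = v
    return frame

left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
# The right module is read out mirrored, so it needs a flip before
# being translated to x = 2.
frame = assemble([(left, False, (0, 0)), (right, True, (2, 0))], 4, 2)
# frame == [[1, 2, 6, 5], [3, 4, 8, 7]]
```

In a real pipeline each module would be processed by an independent task, which is where the parallelization benefit comes from.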
|
|
|
|
|
|
This information must be provided by the camera plugin so the processing pipeline can properly assemble the image. Three variants are being analyzed:
|
|
|
|
|
|
1. Include the RoI, flip and rotation information in the detector layout structure. This simplifies the camera plugin implementation because it does not need to know about the processing API. However, the camera plugin must in any case be able to describe the image transformations performed by the hardware, [as mentioned before](#hw-image-transformations). Such a `HardwareLayout` is specific to the insertion of the image into the processing pipeline, and the order of the transformations must be explicitly defined.
|
|
|
|
|
|
2. The camera plugin can provide a description of a chain of transformation tasks to reconstruct the sub-images in terms of RoI, flip and/or rotation, so the layout structure only includes the final translation to assemble the image. This can be done in a way similar to how the camera exposes the sequence of transformations performed by the hardware.
|
|
|
|
|
|
3. The camera plugin can provide a chain of SW tasks to be inserted at the head of the processing pipeline. These tasks can perform not only the above-mentioned image transformations, but also SDK-specific tasks like background subtraction or flat-field correction.
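The three variants can be contrasted as data structures. Everything below is a hypothetical sketch (the type names and field choices are assumptions, not Lima2 definitions):

```python
from dataclasses import dataclass, field

@dataclass
class Roi:
    x: int
    y: int
    w: int
    h: int

# Variant 1: transformations embedded in the layout structure itself,
# with an explicit ordering entry.
@dataclass
class HardwareLayout:
    roi: Roi
    flip: tuple
    rotation: int
    translation: tuple
    order: tuple = ("roi", "flip", "rotation")

# Variant 2: a declarative chain of transformation descriptions;
# the layout keeps only the final translation.
@dataclass
class TransformChain:
    steps: list            # e.g. [("roi", Roi(0, 0, 512, 512)), ("flip", (True, False))]
    translation: tuple = (0, 0)

# Variant 3: opaque SW tasks (callables) inserted at the pipeline head;
# these may also carry SDK-specific work such as flat-field correction.
@dataclass
class TaskChain:
    tasks: list = field(default_factory=list)  # [callable(frame) -> frame]
```

Variant 1 keeps everything in one structure, variant 2 stays declarative, and variant 3 trades introspectability for the freedom to run arbitrary SDK code.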
|
|