Optimize `StepScanDataWatch`
Possibly the reason for #2067.
`StepScanDataWatch` decodes all Redis data to get the number of points published for each channel before fetching the data points from Redis, which causes a lot of redundant stream reading:
```
  File "/home/denolf/dev/bliss/bliss/scanning/scan.py", line 1215, in _device_event
    self.__trigger_data_watch_callback(signal, sender, sync=True)
  File "/home/denolf/dev/bliss/bliss/scanning/scan.py", line 1146, in __trigger_data_watch_callback
    data_events, self.nodes, self._scan_info
  File "/home/denolf/dev/bliss/bliss/scanning/scan.py", line 169, in on_scan_data
    nb_points = len(channel)
  File "/home/denolf/dev/bliss/bliss/data/nodes/channel.py", line 357, in __len__
    evdata = self.decode_raw_events(events)
  File "/home/denolf/dev/bliss/bliss/data/nodes/channel.py", line 343, in decode_raw_events
    ev = ChannelDataEvent.merge(events)
```
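The core of the optimization is to keep a running point count per channel, updated incrementally from the incoming data events, instead of re-decoding the whole Redis stream via `len(channel)` on every event. A minimal sketch of the idea (the class and the event shape are hypothetical illustrations, not the actual bliss API):

```python
class PointCountTracker:
    """Keeps a running point count per channel, updated from data events,
    so the total never has to be recomputed by decoding the Redis stream."""

    def __init__(self):
        # channel name -> number of points seen so far
        self._nb_points = {}

    def on_data_event(self, channel_name, event_npoints):
        # Each data event carries the number of new points it holds,
        # so an O(1) increment replaces an O(n) len(channel) call.
        self._nb_points[channel_name] = (
            self._nb_points.get(channel_name, 0) + event_npoints
        )
        return self._nb_points[channel_name]


tracker = PointCountTracker()
tracker.on_data_event("axis:roby", 10)
tracker.on_data_event("axis:roby", 5)   # running count is now 15
```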
Optimize by avoiding `len(channel)` in `StepScanDataWatch`. Check the total time spent in `StepScanDataWatch.on_scan_data` (excluding the callback) for this scan and session:

```
NEXUS_WRITER_SESSION [2]: s=loopscan(2000,1e-6,save=False)
```
- Before optimization: 185 sec
- After optimization: 116 sec
Further optimization requires using Redis pipelines. However, the current implementation does not allow partial data fetching with pipelines; this will be done in #2355 (closed).
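For context, the benefit of a pipeline is that all per-channel reads are queued client-side and executed in a single round trip, instead of one round trip per channel. A schematic sketch of that batching idea, using a stub client (this is neither redis-py nor the bliss implementation):

```python
class StubPipeline:
    """Queues read commands and executes them as one batch,
    mimicking how a Redis pipeline saves network round trips."""

    def __init__(self, store):
        self._store = store      # stands in for the Redis server data
        self._commands = []      # queued stream reads

    def xrange(self, stream):
        # Queue the read instead of executing it immediately.
        self._commands.append(stream)
        return self

    def execute(self):
        # A single "round trip" returns all queued results at once.
        return [self._store.get(stream, []) for stream in self._commands]


# Two channel streams fetched with one execute() call:
store = {"chan:a": [1, 2, 3], "chan:b": [4, 5]}
pipe = StubPipeline(store)
pipe.xrange("chan:a").xrange("chan:b")
results = pipe.execute()
```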