get_h5_value: use 'h5py_read_dataset' and avoid reimplementing it higher up.
close #88
Activity
assigned to @payno
```python
def get_h5_value(fname, h5_path, default_ret=None):
    with hdf5_open(fname) as fid:
        try:
            val_ptr = fid[h5_path][()]
```

I would really prefer to keep `val_ptr = fid[h5_path][()]` rather than `val_ptr = h5py_read_dataset(fid[h5_path])`. With the former, we simply use the h5py/HDF5 file API. With the latter, we add yet another layer of libraries quite artificially, for little benefit. Even worse, we saw catastrophic failures of silx and HDF5 handling in the past release, so I'd rather stick with the simplest call possible.
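For context, a minimal sketch of how the two variants sit in `get_h5_value` (the `hdf5_open` stand-in and the `except` fallback are assumptions; only the first lines of the function appear in the diff):

```python
from contextlib import contextmanager

import h5py
from silx.io.utils import h5py_read_dataset  # only needed for the silx variant


@contextmanager
def hdf5_open(fname):
    # stand-in for the project's hdf5_open helper (assumption)
    with h5py.File(fname, "r") as fid:
        yield fid


def get_h5_value(fname, h5_path, default_ret=None):
    with hdf5_open(fname) as fid:
        try:
            # current code: plain h5py read (string datasets come back as bytes with h5py >= 3)
            val_ptr = fid[h5_path][()]
            # variant proposed in this MR:
            # val_ptr = h5py_read_dataset(fid[h5_path])
        except KeyError:
            val_ptr = default_ret  # assumed fallback, not visible in the diff
    return val_ptr
```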
On the other hand:
- This is the implementation of the existing `# TODO: look at silx.io.utils.h5py_read_dataset might replace it`.
- It removes the local handling of the decoding (less code to maintain) and moves to something more robust in my opinion: `h5py_read_dataset` should handle more use cases if the stored values change over time (see the sketch after this list).
- `h5py_read_dataset` is already heavily used, so if it breaks, having it at one more or one less place would not make much difference.
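To picture the decoding point from the second bullet, a small hedged example (the file name and dataset path are made up; the plain-h5py behaviour shown is that of h5py >= 3, and silx documents `h5py_read_dataset` as decoding UTF-8 strings to `str`, ASCII ones only with `decode_ascii=True`):

```python
import h5py
from silx.io.utils import h5py_read_dataset

# hypothetical file with a scalar string dataset
with h5py.File("example.h5", "w") as f:
    f["entry/title"] = "my scan title"

with h5py.File("example.h5", "r") as f:
    raw = f["entry/title"][()]                     # bytes with h5py >= 3
    decoded = h5py_read_dataset(f["entry/title"])  # expected: str
    print(type(raw), type(decoded))
```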
Regarding the bad time we had during the last release, I understand the reluctance, but from my point of view this would give us something more robust to maintain, with less code (on the condition that the upper-level code properly handles the 'converted' dataset, which I don't know).
Nevertheless, I won't fight for it any further. Otherwise we can remove the `TODO`.