Commit cc3ba686 authored by GUILLOU Perceval's avatar GUILLOU Perceval

doc update

parent e0cbef35
......@@ -207,7 +207,7 @@ class DataNodeIterator(object):
self.last_child_id[db_name] = i + 1
child_node = data_nodes.get(child_name)
if child_node is None:
-                pipeline.lrem("%s_children_list" % db_name, child_name)
+                pipeline.lrem("%s_children_list" % db_name, 0, child_name)
continue
if filter is None or child_node.type in filter:
yield child_node
......
......@@ -529,11 +529,6 @@ def cli(
else:
history_filename = os.path.join(os.environ["HOME"], history_filename)
-    scan_printer = ScanPrinter()
-    set_scan_watch_callbacks(
-        scan_printer._on_scan_new, scan_printer._on_scan_data, scan_printer._on_scan_end
-    )
# Create REPL.
repl = BlissRepl(
get_globals=get_globals,
......@@ -586,6 +581,13 @@ def embed(*args, **kwargs):
try:
cmd_line_i = cli(*args, **kwargs)
+        scan_printer = ScanPrinter()
+        set_scan_watch_callbacks(
+            scan_printer._on_scan_new,
+            scan_printer._on_scan_data,
+            scan_printer._on_scan_end,
+        )
if stop_signals:
def stop_current_task(signum, frame, exception=gevent.GreenletExit):
......
-Data produced among sessions is published into [Redis](1) (RAM storage) and written to disk in a hdf5 file ([see data saving](scan_saving.md)).
+Data produced by sessions is published into [Redis][1] (RAM storage) and at the same time written to disk in an HDF5 file ([see data saving](scan_saving.md)).
On Redis, data is stored for a limited period of time (1 day by default) and up to a limited amount of memory (1 GB by default).
## Experiment and data files structure
......@@ -8,7 +8,7 @@ In a Bliss session, the experiment can be seen as a tree, where the trunk is the
As an example, imagine that we create a session named '**test_session**' and that we want to perform several measurements with two different samples.
The measurements will consist of scanning the samples {'**sample1**', '**sample2**'} by moving a motor '**roby**' and measuring the value of a counter '**diode**' along the scan.
-The sample scan is performed with the command '**ascan(roby, 0, 10, 10, 0.1, diode)**' ([see scan commands](scan_default.md)).
+The sample scan is performed with the command `ascan(roby, 0, 9, 10, 0.1, diode)` ([see scan commands](scan_default.md)).
* Start the session:
......@@ -23,21 +23,29 @@ While starting a session for the first time, a directory with the same name as t
>>> SCAN_SAVING.template = '{session}/{sample}/' # modify the data saving path template
>>> SCAN_SAVING.sample = 'sample1' # set the value of the parameter {sample}
```
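As a sketch of what this does, the template is expanded with the current parameter values to build the saving path (plain `str.format` semantics; the `expand` helper below is hypothetical, and the real `SCAN_SAVING` object also prepends a base directory taken from its configuration):

```python
# Minimal sketch of how a SCAN_SAVING-style template could expand into a
# directory path.  The expand() helper is hypothetical; the real SCAN_SAVING
# object resolves the template and base directory from its configuration.
template = "{session}/{sample}/"

def expand(template, **params):
    """Fill the template placeholders with the current parameter values."""
    return template.format(**params)

path = expand(template, session="test_session", sample="sample1")
print(path)
# -> test_session/sample1/
```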
* Perform a first measurement:
```py
->>> ascan(roby, 0, 10, 10, 0.1, diode)
+>>> ascan(roby, 0, 9, 10, 0.1, diode)
```
* Perform a second measurement:
```py
->>> ascan(roby, 0, 10, 10, 0.1, diode)
+>>> ascan(roby, 0, 9, 10, 0.1, diode)
```
* Change the data saving path for measurements on sample2:
```py
>>> SCAN_SAVING.sample = 'sample2'
```
* Perform a measurement:
```py
->>> ascan(roby, 0, 10, 10, 0.1, diode)
+>>> ascan(roby, 0, 9, 10, 0.1, diode)
```
For this experiment the files structure inside the session main folder is described by the following tree:
......@@ -50,7 +58,7 @@ The measurements data can be accessed by reading the content of the hdf5 files (
## Experiment and Redis data structure
-Redis stores the data in RAM memory as soon as it is produced. Therefore retrieving data from **Redis allows a fast and live access to the data**.
+In parallel with the in-file data storage described above, Redis stores the data in RAM as soon as it is produced. Therefore retrieving data from **Redis allows fast and live access to the data**.
Redis stores the data as a flattened list of (key, value) pairs, but fortunately the **bliss.data.node** module provides a simple interface to access the data in a structured way that reflects the session structure.
```py
......@@ -86,16 +94,100 @@ diode test_session:mnt:c:tmp:sample2:1_ascan:axis:timer:diode:diode
```
-With the function **n = get_node("node_name")** *node_name* is your entry point and *n* is the associated [DataNodeContainer]() .
-With the function **n.iterator.walk(wait=False)** you can iterate over all the children nodes of the node *n*.
-Among the children nodes, we can distinguish two other types of node, the [ChannelDataNode]() and the [scan](). They both inherit from the *DataNodeContainer* class.
+With the function `n = get_node("node_db_name")` *node_db_name* is your entry point and *n* is the associated [DataNodeContainer](scan_data_node.md#datanodecontainer).
+With the function `n.iterator.walk(wait=False)` you can iterate over all the child nodes of the node *n* ([see DataNodeIterator](scan_data_node.md#datanodeiterator)).
+Among the child nodes, we can distinguish two other types of node, the [ChannelDataNode](scan_data_node.md#channeldatanode) and the [scan](scan_data_node.md#scan); the *Scan* node inherits from the `DataNodeContainer` class, while the *ChannelDataNode* inherits directly from `DataNode`.
The experimental measurements are associated with the *ChannelDataNodes*, and the scan objects with the *Scan* nodes.
![Screenshot](img/data_structure.svg)
You can access any node using its full name (`db_name`):
```py
>>> cdn_roby = get_node("test_session:mnt:c:tmp:sample1:1_ascan:axis:roby")
```
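The `db_name` is a ':'-separated string that mirrors the node's position in the tree, so the session name and the node's short name can be recovered by splitting it (a simple illustration on a plain string, not a BLISS API call):

```python
# The db_name encodes the path from the session down to the node, with ':'
# as separator.  Splitting it recovers the individual tree levels.
db_name = "test_session:mnt:c:tmp:sample1:1_ascan:axis:roby"
parts = db_name.split(":")
session_name = parts[0]   # the session at the root of the tree
short_name = parts[-1]    # the node's short name
print(session_name, short_name)
# -> test_session roby
```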
### Online data analysis
The classes inheriting from the `DataNode` class provide the `iterator` method which returns a `DataNodeIterator` object.
The DataNodeIterator provides convenient methods to monitor the events happening during the experiment and to follow the data production.
The method `walk(filter=None, wait=True)` iterates over existing child nodes that match the `filter` argument. If `wait` is True (the default), the function blocks until a new node appears; it yields each new node as it appears and then waits again for the next one.
The method `walk_events(filter=None)` walks through child nodes just like `walk`, but waits for node events (like `EVENTS.NEW_CHILD` or `EVENTS.NEW_DATA_IN_CHANNEL`). It yields the event type and the node, then waits again for the next event.
```py
>>> session = get_node("test_session")
>>> def f(filter=None):
...     """wait for any new node in the session"""
...     for node in session.iterator.walk(filter=filter):
...         print(node.name, node.type)

>>> def g(filter='channel'):
...     """wait for a new event happening in any node of type 'channel' (ChannelDataNode)"""
...     for event_type, node in session.iterator.walk_events(filter=filter):
...         print(event_type, node.name, node.get(-1))

# spawn greenlets to avoid blocking
>>> g1 = gevent.spawn(f)
>>> g2 = gevent.spawn(g)
# start a scan
>>> ascan(roby,0,9,10,0.01,diode)
# the monitoring prints pop out during the scan
# (produced by f and g running in greenlets)
10_ascan scan
axis None
roby channel
event.NEW_CHILD roby None
timer None
elapsed_time channel
event.NEW_CHILD elapsed_time None
diode None
diode channel
event.NEW_CHILD diode None
event.NEW_DATA_IN_CHANNEL elapsed_time 0.0
event.NEW_DATA_IN_CHANNEL roby 0.0
event.NEW_DATA_IN_CHANNEL diode 70.0
event.NEW_DATA_IN_CHANNEL elapsed_time 0.157334566116333
event.NEW_DATA_IN_CHANNEL roby 1.0
event.NEW_DATA_IN_CHANNEL diode -57.0
event.NEW_DATA_IN_CHANNEL elapsed_time 0.33751416206359863
event.NEW_DATA_IN_CHANNEL roby 2.0
event.NEW_DATA_IN_CHANNEL diode -61.0
...
# do not forget to kill the greenlets at the end
>>> g1.kill()
>>> g2.kill()
```
Note: In the example above we use `node.get(-1)` to retrieve the last data value produced on this node. For more details see method `get(from_index, to_index=None)` of the [ChannelDataNode](scan_data_node.md#channeldatanode).
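The indexing convention assumed here follows Python sequences: negative indices count from the end of the produced data. A minimal stand-in on a plain list (the real `get` fetches the values from Redis):

```python
# Sketch of the ChannelDataNode.get(from_index, to_index) indexing
# convention on a plain list, assuming Python-style negative indexing;
# the real method reads the channel data from Redis.
def get(data, from_index, to_index=None):
    if to_index is None:
        return data[from_index]       # single value, e.g. get(-1) -> last one
    return data[from_index:to_index]  # a range of values

diode_values = [70.0, -57.0, -61.0, -43.0]
last = get(diode_values, -1)          # the last produced value
first_two = get(diode_values, 0, 2)   # the first two values
print(last, first_two)
# -> -43.0 [70.0, -57.0]
```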
### Scanning & experiment sequences
In the context of an experiment or a scan procedure, there is a more convenient way to obtain the data produced by a scan.
At the shell level, the global variable `SCANS` stores all the scan objects that have been launched in this shell session.
For example, `SCANS[-1]` returns the last scan object and `SCANS[-1].get_data()` returns the data of that scan.
```py
>>> SCANS
deque([Scan(number=1, name=ascan, path=/mnt/c/tmp/sample2/test_session/data.h5),
Scan(number=2, name=ascan, path=/mnt/c/tmp/sample2/test_session/data.h5)],
maxlen=20)
>>> SCANS[-1]
Scan(number=2, name=ascan, path=/mnt/c/tmp/sample2/test_session/data.h5)
>>> SCANS[-1].get_data()
{
'roby': array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]),
'elapsed_time': array([0., 0.15733457, 0.33751416, 0.49564481, 0.6538229 , 0.80611968, 0.94998145, 1.12727523, 1.28556204, 1.4369061 ]),
'diode': array([ 70., -57., -61., -43., 89., 54., 23., -89., -87., -98.])
}
```
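Since `get_data()` returns the channels as numpy arrays, the result can be processed directly; for example, locating the motor position where the diode reading is maximal (using the values shown above):

```python
import numpy as np

# Channel values as returned by SCANS[-1].get_data() in the example above.
data = {
    "roby": np.array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]),
    "diode": np.array([70., -57., -61., -43., 89., 54., 23., -89., -87., -98.]),
}

# Index of the maximum diode value, and the motor position at that point.
i_max = int(np.argmax(data["diode"]))
peak_position = data["roby"][i_max]   # -> 4.0
peak_value = data["diode"][i_max]     # -> 89.0
print(peak_position, peak_value)
```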
[1]: https://redis.io/
[2]: http://silx.org
......
# AcquisitionMaster
# AcquisitionDevice
# AcquisitionChain
\ No newline at end of file
# DataNode
This is the base class of the DataNodeContainer, ChannelDataNode and LimaImageChannelDataNode classes.
This object cannot have child nodes.
#### Attributes
* `db_name`: the full name of the node. Reflects the position of the node in the tree of nodes.
* `name`: the short name for the node.
* `type`: the type of the node (str).
* `parent`: the parent node.
* `info`: info about the node.
# DataNodeContainer
This class inherits from the DataNode class and can have a list of child nodes.
* `type = container`
#### Methods
* `add_children(*child)`
* `children(from_id=0, to_id=-1)`
* `last_child()`
# ChannelDataNode
This class inherits from the DataNode class and is designed to hold data.
* `type = channel`
#### Attributes
* `shape`
* `dtype`
* `alias`
#### Methods
* `get(from_index, to_index=None)`
# LimaImageChannelDataNode
#### Methods
* `get(from_index, to_index=None)`
# Scan
This class inherits from the DataNodeContainer class and is designed for scans.
* `type = scan`
# DataNodeIterator
#### Methods
* `walk(filter=None, wait=True, ready_event=None)`
* `walk_events(filter=None, ready_event=None)`
\ No newline at end of file
......@@ -90,6 +90,8 @@ nav:
- Alignment: scan_alignment.md
- Scans' presets: scan_presets.md
- Data saving: scan_saving.md
- Data node: scan_data_node.md
- Acquisition chain: scan_acquisition_chain.md
- BLISS scan engine explained: scan_engine.md
- Continuous scans: scan_continuous.md
- Instruments:
......