elektronn2.neuromancer.node_basic module

class elektronn2.neuromancer.node_basic.Node(parent, name='', print_repr=False)[source]

Bases: object

Basic node class. All neural network nodes should inherit from Node.

Parameters:
  • parent (Node or list[Node]) – The input node(s).
  • name (str) – Given name of the Node, may be an empty string.
  • print_repr (bool) – Whether to print the node representation upon initialisation.

Models are built from the interplay of Nodes to form a (directed, acyclic) computational graph.

The ELEKTRONN2 framework can be seen as an intelligent abstraction level that hides the raw theano-graph and manages the involved symbolic variables. The overall goal is the intuitive, flexible and easy creation of complicated graphs.

A Node has one or several inputs, called parent(s), unless it is a source, i.e. a node where external data is fed into the graph. The inputs are Node objects themselves.

Nodes automatically keep track of their previous inputs, parameters, computational cost etc. This allows compiling the theano functions without manually specifying the inputs, outputs and parameters. In the simplest case, any node, which might be part of a more complicated graph, can be called like a function (passing suitable numpy arrays):

>>> import numpy as np
>>> import elektronn2.neuromancer.utils
>>> from elektronn2 import neuromancer
>>> batch_size, in_dim = 10, 20  # illustrative sizes
>>> inp = neuromancer.Input((batch_size, in_dim))
>>> test_data = elektronn2.neuromancer.utils.as_floatX(np.random.rand(batch_size, in_dim))
>>> out = inp(test_data)
>>> np.allclose(out, test_data)
True

On the first call, the theano function is compiled and cached for re-use in subsequent calls.

Several properties are exposed, either with respect to the whole sub-graph the node depends on or with respect to the node itself. These can also be looked up externally (e.g. required sources, parameter count, computational cost).

The theano variable that represents the output of a node is kept in the attribute output. Subsequent Nodes must use this attribute of their inputs to perform their calculation and write the result to their own output (this happens in the method _calc_output, which is hidden because it must be called only internally at initialisation).

A divergence in the computational graph is created by passing the parent to several children as input:

>>> import theano.tensor as T
>>> func1, func2 = T.tanh, T.sqr  # illustrative element-wise functions
>>> inp = neuromancer.Input((1,10), name='Input_1')
>>> node1 = neuromancer.ApplyFunc(inp, func1)
>>> node2 = neuromancer.ApplyFunc(inp, func2)

A convergence in the graph is created by passing several inputs to a node that performs a reduction:

>>> out = neuromancer.Concat([node1, node2])

Although the node “out” has two immediate inputs, it is detected that only a single source object is required:

>>> print(out.input_nodes)
Input_1

Computations that result in more than a single output for a Node must be broken apart using divergence and individual nodes for the several outputs. Alternatively, the function split can be used to create dummy nodes from the output of a previous Node by splitting it along the specified axis (see the sketch below). Note that possible redundant computations in Nodes are most likely eliminated by the theano graph optimiser.
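
An illustrative sketch only (assuming split is exposed on the neuromancer namespace like the nodes used above, and reusing the inp node from the divergence example) of splitting a node into two halves along its feature axis:

>>> upper, lower = neuromancer.split(inp, axis='f', n_out=2)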

Instructions for subclassing:

Overriding __init__:
First of all, the base class’ initialiser must be called, which just assigns the name and empty default values for attributes. Then node-specific initialisations are made, e.g. initialisation of shared parameters / weights. Finally, the _finialise_init method of the base class is automatically called: this triggers the execution of the methods _make_output, _calc_shape and _calc_comp_cost, each of which updates the corresponding attributes. NOTE: if a Node (other than the base Node) is subclassed and the derived class calls the __init__ of that Node, this will also invoke _finialise_init right after the call to the superclass’ __init__.

For the graph serialisation and restoration to work, the following conditions must additionally be met:

  • The name of a node’s trainable parameter in the parameter dict must be the same as the (optional) keyword used to initialise this parameter in __init__; moreover, parameters must not be initialised/shared from positional arguments.
  • When serialising, only the current state of the parameters is kept; parameter value arrays given for initialisation are never stored.

Depending on the purpose of the node, the latter methods and others (e.g. __repr__) must be overridden. The default behaviour of the base class is: output = input, output shape = input shape, computational cost = tensor size (!) …
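
As a rough, hedged sketch of these instructions (the attributes parent and output follow the description above; everything else, including the class itself, is illustrative and relies on the default _calc_shape and _calc_comp_cost behaviour being sufficient for an element-wise operation):

    class Scale(Node):
        def __init__(self, parent, factor=2.0, name='scale', print_repr=True):
            # 1) Call the base initialiser first (assigns name and empty defaults).
            super(Scale, self).__init__(parent, name, print_repr)
            # 2) Node-specific initialisation.
            self.factor = factor
            # 3) _finialise_init is invoked automatically afterwards and triggers
            #    _make_output, _calc_shape and _calc_comp_cost.

        def _make_output(self):
            # Read the parent's symbolic output and write the result to self.output.
            self.output = self.parent.output * self.factor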

all_children
all_computational_cost
all_extra_updates

List of the parameter updates of all parent nodes. They are tuples.

all_nontrainable_params

Dict of the non-trainable parameters of all parent nodes. They are theano shared variables.

all_params

Dict of all parameters of all parent nodes. They are theano variables.

all_params_count

Count of all trainable parameters in the entire sub-graph used to compute the output of this node.

all_parents

List of all nodes that are involved in the computation of the output of this node (incl. self). The list contains no duplicates. The return value is actually a dict whose keys are the nodes and whose values are all True.

all_trainable_params

Dict of the trainable parameters (weights) of all parent nodes. They are theano shared variables.

feature_names
get_debug_outputs(*args)[source]
get_param_values(skip_const=False)[source]

Returns a dict that maps parameter names to their values (such that they can be saved to disk).

Parameters: skip_const (bool) – Whether to exclude constant parameters.
Returns: Dict that maps parameter names to their values.
Return type: dict
input_nodes

Contains all parent nodes that are sources, i.e. inputs that are required to compute the result of this node.

input_tensors

The same as input_nodes but contains the theano tensor variables instead of the node objects. May be used as input to compile theano functions.
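
For instance (illustrative only; theano.function is the standard Theano API), a function computing this node’s output from its required source tensors could be compiled by hand:

>>> import theano
>>> fn = theano.function(out.input_tensors, out.output)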

last_exec_time

Last function execution time in seconds.

local_exec_time
measure_exectime(n_samples=5, n_warmup=4, print_info=True, local=True, nonegative=True)[source]

Measure how much time the node needs for its calculation (in milliseconds).

Parameters:
  • n_samples (int) – Number of independent measurements of which the median is taken.
  • n_warmup (int) – Number of warm-up runs before each measurement (not taken into account for median calculation).
  • print_info (bool) – If True, print detailed info about measurements while running.
  • local (bool) – Only compute exec time for this node by subtracting its parents’ times.
  • nonegative (bool) – Do not return exec times smaller than zero.
Returns:

median of execution time measurements.

Return type:

np.float
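
A hedged usage sketch (using the inp node from the examples above):

>>> t_ms = inp.measure_exectime(n_samples=5, n_warmup=4, print_info=False)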

param_count

Count of trainable parameters in this node.

plot_theano_graph(outfile=None, compiled=True, **kwargs)[source]

Plot the execution graph of this Node’s Theano function to a file.

If “outfile” is not specified, the plot is saved in “/tmp/<user>_<name>.png”

Parameters:
  • outfile (str or None) – File name for saving the plot.
  • compiled (bool) – If True, the function is compiled before plotting.
  • kwargs – kwargs (plotting options) that get directly passed to theano.printing.pydotprint().
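
An illustrative call (requires pydot/graphviz, on which theano.printing.pydotprint relies; the output path is an example):

>>> out.plot_theano_graph(outfile='/tmp/out_graph.png')
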
predict_dense(raw_img, as_uint8=False, pad_raw=False)[source]

Core function that performs the inference.

Parameters:
  • raw_img (np.ndarray) – raw data in the format (ch, (z,) y, x)
  • as_uint8 (bool) – Return class probabilities as a uint8 image (scaled between 0 and 255!)
  • pad_raw (bool) – Whether to apply padding (by mirroring) to the raw input image in order to get predictions on the full image domain.
Returns:

Predictions.

Return type:

np.ndarray
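
A hedged usage sketch (pred stands for a trained prediction node and the single-channel 3D volume is illustrative):

>>> raw_img = np.zeros((1, 32, 128, 128), dtype=np.float32)  # (ch, z, y, x)
>>> probs = pred.predict_dense(raw_img, as_uint8=False, pad_raw=True)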

set_param_values(value_dict, skip_const=False)[source]

Sets new values for non-constant parameters.

Parameters:
  • value_dict (dict) – A dict that maps values by parameter name.
  • skip_const (bool) – If the dict also maps values for constant parameters, these can be skipped; otherwise an exception is raised.
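
Together with get_param_values, this allows a simple save/restore round trip (a sketch; model stands for any node of a trained graph and the pickle-based persistence is illustrative):

>>> import pickle
>>> values = model.get_param_values()
>>> with open('params.pkl', 'wb') as f:
...     pickle.dump(values, f)
>>> with open('params.pkl', 'rb') as f:
...     model.set_param_values(pickle.load(f))
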
test_run(on_shape_mismatch='warn', debug_outputs=False)[source]

Test execution of this node with random (but correctly shaped) data.

Parameters: on_shape_mismatch (str) – If this is “warn”, a warning is emitted if there is a mismatch between expected and calculated output shapes.
Returns: Debug output of the Theano function.
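
For example, to sanity-check the graph built above with random input data:

>>> out.test_run(on_shape_mismatch='warn')
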
total_exec_time
class elektronn2.neuromancer.node_basic.Input(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Input node

Parameters:
  • shape (list/tuple of int) – shape of input array, unspecified shapes are None
  • tags (list/tuple of strings or comma-separated string) –

    tags indicate which purpose the dimensions of the tensor serve. They are sometimes used to decide about reshapes. The maximal tensor has tags: “r, b, f, z, y, x, s” which denote:

    • r: perform recurrence along this axis
    • b: batch size
    • f: features, filters, channels
    • z: convolution no. 3 (slower than 1,2)
    • y: convolution no. 1
    • x: convolution no. 2
    • s: samples of the same instance (over which expectations are calculated)

    Unused axes are to be removed from this list, but b and f must always remain. To avoid bad memory layout, the order must not be changed. For less than 3 convolutions conv1,conv2 are preferred for performance reasons. Note that CNNs can mix nodes with 2d and 3d convolutions as 2d is a special case of 3d with filter size 1 on the respective axis. In this case conv3 should be used for the axis with smallest filter size.
  • strides
  • fov
  • dtype (str) – corresponding to numpy dtype (e.g., ‘int64’). Default is floatX from theano config
  • hardcoded_shape
  • name (str) – Node name.
  • print_repr (bool) – Whether to print the node representation upon initialisation.
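
A hedged construction example (shape and tag values are illustrative; the tags are given as a comma-separated string as described above):

>>> img = neuromancer.Input((1, 3, 64, 64), 'b,f,y,x', name='raw')
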
elektronn2.neuromancer.node_basic.Input_like(ref, dtype=None, name='input', print_repr=True, override_f=False, hardcoded_shape=False)[source]
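
Input_like presumably creates a new Input node mimicking the shape and tags of an existing node ref; a hedged sketch reusing the inp node from above:

>>> inp2 = neuromancer.Input_like(inp, name='Input_2')
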
class elektronn2.neuromancer.node_basic.Concat(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Node to concatenate the inputs. The inputs must have the same shape, except in the dimension corresponding to axis. This is not checked as shapes might be unspecified prior to compilation!

Parameters:
  • parent_nodes (list of Node) – Inputs to be concatenated.
  • axis (int) – Join axis.
  • name (str) – Node name.
  • print_repr (bool) – Whether to print the node representation upon initialisation.
class elektronn2.neuromancer.node_basic.ApplyFunc(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Apply function to the input. If the function changes the output shape, this node should not be used.

Parameters:
  • parent (Node) – Input (single).
  • functor (function) – Function that acts on theano variables (e.g. theano.tensor.tanh).
  • args (tuple) – Arguments passed to functor after the input.
  • kwargs (dict) – kwargs for functor.
class elektronn2.neuromancer.node_basic.FromTensor(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Dummy node to be used in the split-function.

Parameters:
  • tensor (T.Tensor) –
  • tensor_shape
  • tensor_parent (T.Tensor) –
  • name (str) – Node name.
  • print_repr (bool) – Whether to print the node representation upon initialisation.
elektronn2.neuromancer.node_basic.split(node, axis='f', index=None, n_out=None, strip_singleton_dims=False, name='split')[source]
class elektronn2.neuromancer.node_basic.GenericInput(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Input node for an arbitrary object.

Parameters:
  • name (str) – Node name.
  • print_repr (bool) – Whether to print the node representation upon initialisation.
class elektronn2.neuromancer.node_basic.ValueNode(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

(Optionally) trainable value node

Parameters:
  • shape (list/tuple of int) – shape of input array, unspecified shapes are None
  • tags (list/tuple of strings or comma-separated string) –

    tags indicate which purpose the dimensions of the tensor serve. They are sometimes used to decide about reshapes. The maximal tensor has tags: “r, b, f, z, y, x, s” which denote:

    • r: perform recurrence along this axis
    • b: batch size
    • f: features, filters, channels
    • z: convolution no. 3 (slower than 1,2)
    • y: convolution no. 1
    • x: convolution no. 2
    • s: samples of the same instance (over which expectations are calculated)

    Unused axes are to be removed from this list, but b and f must always remain. To avoid bad memory layout, the order must not be changed. For less than 3 convolutions conv1,conv2 are preferred for performance reasons. Note that CNNs can mix nodes with 2d and 3d convolutions as 2d is a special case of 3d with filter size 1 on the respective axis. In this case conv3 should be used for the axis with smallest filter size.

  • strides
  • fov
  • dtype (str) – corresponding to numpy dtype (e.g., ‘int64’). Default is floatX from theano config
  • apply_train (bool) –
  • value
  • init_kwargs (dict) –
  • name (str) – Node name.
  • print_repr (bool) – Whether to print the node representation upon initialisation.
get_value()[source]
class elektronn2.neuromancer.node_basic.MultMerge(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Node to merge two input nodes by element-wise multiplication. The inputs must have compatible shapes; this is not checked as shapes might be unspecified prior to compilation!

Parameters:
  • n1 (Node) – First input node.
  • n2 (Node) – Second input node.
  • name (str) – Node name.
  • print_repr (bool) – Whether to print the node representation upon initialisation.
class elektronn2.neuromancer.node_basic.InitialState_like(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Parameters:
  • parent
  • override_f
  • dtype
  • name
  • print_repr
  • init_kwargs
get_value()[source]
class elektronn2.neuromancer.node_basic.Add(**kwargs)[source]

Bases: elektronn2.neuromancer.node_basic.Node

Add two nodes using theano.tensor.add.

Parameters:
  • n1 (Node) – First input node.
  • n2 (Node) – Second input node.
  • name (str) – Node name.
  • print_repr (bool) – Whether to print the node representation upon initialisation.
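
A hedged usage sketch (assuming Add is exposed on the neuromancer namespace like the other nodes, and continuing the node1/node2 example from the Node docstring above):

>>> summed = neuromancer.Add(node1, node2, name='sum')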