elektronn2.data package

Submodules

elektronn2.data.cnndata module

class elektronn2.data.cnndata.AgentData(input_node, side_target_node, path_prefix=None, raw_files=None, skel_files=None, vec_files=None, valid_skels=None, target_vec_ix=None, target_discrete_ix=None, abs_offset=None, aniso_factor=2)[source]

Bases: elektronn2.data.cnndata.BatchCreatorImage

Load raw_cube, vec_prob_obj_cube and skeleton files, together with the relative offset.

get_newslice(position_l, direction_il, batch_size=1, source='train', aniso=True, z_shift=0, gamma=0, grey_augment_channels=None, r_max_scale=0.9, tracing_dir_prior_c=0.5, force_dense=False, flatfield_p=0.001, scale=1.0, last_ch_max_interp=False)[source]
getbatch(batch_size=1, source='train', aniso=True, z_shift=0, gamma=0, grey_augment_channels=None, r_max_scale=0.9, tracing_dir_prior_c=0.5, force_dense=False, flatfield_p=0.001)[source]
getskel(source)[source]

Draw an example skeleton according to the sampling weights on training data, or randomly on validation data.

load_data()[source]
Parameters:
  • d_path/l_path (string) – Directories to load data from
  • d_files/l_files (list) – List of data/target files in <path> directory (must be in the same order!). Each list element is a tuple in the form (<Name of h5-file>, <Key of h5-dataset>)
  • cube_prios (list) – List of (not necessarily normalised) sampling weights for drawing examples from the respective cubes. If None, the cube sizes are used as priorities.
  • valid_cubes (list) – List of indices of cubes (from the file lists) to use as validation data and exclude from training. May be an empty list to skip performance estimation on validation data.
read_files()[source]

Image files on disk are expected to be in (ch, x, y, z) or (x, y, z) order, but image stacks are returned as (z, ch, x, y) and targets as (z, x, y), irrespective of the order in the file. If the image files have no channel axis, a singleton channel dimension is added.

class elektronn2.data.cnndata.BatchCreatorImage(input_node, target_node=None, d_path=None, l_path=None, d_files=None, l_files=None, cube_prios=None, valid_cubes=None, border_mode='crop', aniso_factor=2, target_vec_ix=None, target_discrete_ix=None, h5stream=False, zxy=True)[source]

Bases: object

check_files()[source]

Check if file paths in the network config are available.

getbatch(batch_size=1, source='train', grey_augment_channels=None, warp=False, warp_args=None, ignore_thresh=False, force_dense=False, affinities=False, nhood_targets=False, ret_ll_mask=False)[source]

Prepares a batch by randomly sampling, shifting and augmenting patches from the data

Parameters:
  • batch_size (int) – Number of examples in batch (for CNNs often just 1)
  • source (str) – Data set to draw data from: ‘train’/’valid’
  • grey_augment_channels (list) – List of channel indices to apply grey-value augmentation to
  • warp (bool or float) – Whether warping/distortion augmentations are applied to examples (slow; use multiprocessing). If a float is given, warping is applied to this fraction of examples, e.g. 0.5 means approximately every second example.
  • warp_args (dict) – Additional keyword arguments that get passed through to elektronn2.data.transformations.get_warped_slice()
  • ignore_thresh (float) – If the fraction of negative targets in an example patch exceeds this threshold, this example is discarded (Negative targets are ignored for training [but could be used for unsupervised target propagation]).
  • force_dense (bool) – If True, the targets are not sub-sampled according to the CNN output strides. Dense targets require MFP in the CNN!
  • affinities
  • nhood_targets
  • ret_ll_mask (bool) – If True, additional information for each batch example is returned. Currently implemented are two ll_mask arrays that indicate the targeting mode. The first dimension of these arrays is the batch_size!
Returns:

  • data (np.ndarray) – [bs, ch, x, y] or [bs, ch, z, y, x] for 2D and 3D CNNs
  • target (np.ndarray) – [bs, ch, x, y] or [bs, ch, z, y, x]
  • ll_mask1 (np.ndarray) – (optional) [bs, n_target]
  • ll_mask2 (np.ndarray) – (optional) [bs, n_target]
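
Example: a minimal usage sketch with hypothetical paths, file names and HDF5 keys; input_node and target_node are assumed to be the input and target nodes of an already built elektronn2 model:

    from elektronn2.data.cnndata import BatchCreatorImage

    # Hypothetical HDF5 cubes; each entry is (<name of h5 file>, <key of h5 dataset>)
    d_files = [('raw_cube0.h5', 'raw'), ('raw_cube1.h5', 'raw')]
    l_files = [('labels_cube0.h5', 'lab'), ('labels_cube1.h5', 'lab')]

    data = BatchCreatorImage(
        input_node, target_node,              # nodes of an existing model (assumed)
        d_path='~/data/', l_path='~/data/',   # directories containing the cubes
        d_files=d_files, l_files=l_files,
        valid_cubes=[1])                      # hold out the second cube for validation

    batch = data.getbatch(batch_size=1, source='train', warp=0.5,
                          grey_augment_channels=[0])
    img, target = batch[0], batch[1]          # shapes as documented in Returns above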

load_data()[source]
Parameters:
  • d_path/l_path (string) – Directories to load data from
  • d_files/l_files (list) – List of data/target files in <path> directory (must be in the same order!). Each list element is a tuple in the form (<Name of h5-file>, <Key of h5-dataset>)
  • cube_prios (list) – List of (not necessarily normalised) sampling weights for drawing examples from the respective cubes. If None, the cube sizes are used as priorities.
  • valid_cubes (list) – List of indices of cubes (from the file lists) to use as validation data and exclude from training. May be an empty list to skip performance estimation on validation data.
read_files()[source]

Image files on disk are expected to be in (ch, x, y, z) or (x, y, z) order, but image stacks are returned as (z, ch, x, y) and targets as (z, x, y), irrespective of the order in the file. If the image files have no channel axis, a singleton channel dimension is added.

warp_cut(img, target, warp, warp_params)[source]

(Wraps elektronn2.data.transformations.get_warped_slice())

Cuts a warped slice out of the input and target arrays. The same random warping transformation is applied to both the input and the target.

Warping is randomly applied with the probability defined by the warp parameter (see below).

Parameters:
  • img (np.ndarray) – Input image
  • target (np.ndarray) – Target image
  • warp (float or bool) – False/True disable/enable warping completely. If warp is a float, it is used as the ratio of inputs that should be warped. E.g. 0.5 means approx. every second call to this function actually applies warping to the image-target pair.
  • warp_params (dict) – kwargs that are passed through to elektronn2.data.transformations.get_warped_slice(). Can be empty.
Returns:

  • d (np.ndarray) – (Warped) input image slice
  • t (np.ndarray) – (Warped) target slice

warp_stats
class elektronn2.data.cnndata.GridData(*args, **kwargs)[source]

Bases: elektronn2.data.cnndata.AgentData

getbatch(**get_batch_kwargs)[source]

elektronn2.data.image module

elektronn2.data.image.make_affinities(labels, nhood=None, size_thresh=1)[source]

Construct an affinity graph from a segmentation (IDs)

Segments with ID 0 are regarded as disconnected. The spatial shape of the affinity graph is the same as that of seg_gt. This means that some edges are undefined and therefore treated as disconnected. If the offsets in nhood are positive, the edges with the largest spatial index are undefined.

Connected components is run on the affgraph to relabel the IDs locally.

Parameters:
  • labels (4d np.ndarray, int (any precision)) – Volumes of segmentation IDs (bs, z, y, x)
  • nhood (2d np.ndarray, int) – Neighbourhood pattern specifying the edges in the affinity graph. Shape: (#edges, ndim). nhood[i] contains the displacement coordinates of edge i. The number and order of edges are arbitrary.
  • size_thresh (int) – Size filter for connected components; smaller objects are mapped to background.
Returns:

  • aff (5d np.ndarray, int16) – Affinity graph of shape (bs, #edges, x, y, z); 1: connected, 0: disconnected
  • seg_gt (4d np.ndarray, int16) – Relabelled segmentation of shape (bs, x, y, z) after connected components
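
Example: a short sketch with a toy segmentation volume; the 3-connected nearest-neighbour pattern used for nhood is an assumption for illustration, not a documented default:

    import numpy as np
    from elektronn2.data.image import make_affinities

    # Toy segmentation of shape (bs, z, y, x) with background (ID 0) and two objects
    labels = np.zeros((1, 8, 32, 32), dtype=np.int32)
    labels[0, :, :14, :] = 1
    labels[0, :, 18:, :] = 2

    # One edge per spatial axis: a displacement of one voxel along z, y and x
    nhood = np.array([[1, 0, 0],
                      [0, 1, 0],
                      [0, 0, 1]], dtype=np.int32)

    aff, seg = make_affinities(labels, nhood=nhood)
    # aff has one affinity channel per edge; seg is the relabelled segmentation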

elektronn2.data.image.downsample_xy(d, l, factor)[source]

Downsample by averaging.

Parameters:
  • d – data
  • l – label
  • factor – downsampling factor

elektronn2.data.image.ids2barriers(ids, dilute=[True, True, True], connectivity=[True, True, True], ecs_as_barr=True, smoothen=False)[source]
elektronn2.data.image.smearbarriers(barriers, kernel=None)[source]

barriers: 3d volume (z,x,y)

elektronn2.data.image.center_cubes(cube1, cube2, crop=True)[source]

shapes (ch,x,y,z) or (x,y,z)

elektronn2.data.knossos_array module

class elektronn2.data.knossos_array.KnossosArray(path, max_ram=1000, n_preload=2, fixed_mag=1)[source]

Bases: object

Interfaces with knossos cubes, all axes are in zxy order!

cut_slice(shape, offset, out=None)[source]
n_f
preload(position, start_end=None, sync=False)[source]

Preloads the region around position up to the preload distance, but at least enough to cover start_end.

shape
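
Example: a hypothetical usage sketch (the dataset path is made up; shapes and offsets are interpreted in zxy order as stated above):

    from elektronn2.data.knossos_array import KnossosArray

    arr = KnossosArray('/path/to/knossos/dataset/', max_ram=1000, n_preload=2)
    arr.preload((0, 0, 0), sync=True)                 # block until the region is loaded
    cube = arr.cut_slice((64, 128, 128), (0, 0, 0))   # (z, x, y) shape and offset
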
class elektronn2.data.knossos_array.KnossosArrayMulti(path_prefix, feature_paths, max_ram=3000, n_preload=2, fixed_mag=1)[source]

Bases: elektronn2.data.knossos_array.KnossosArray

cut_slice(shape, offset, out=None)[source]
preload(position, sync=True)[source]

elektronn2.data.skeleton module

elektronn2.data.skeleton.trace_to_kzip(trace_xyz, fname)[source]
class elektronn2.data.skeleton.SkeletonMFK(aniso_scale=2, name=None, skel_num=None)[source]

Bases: object

Joints: all branch points and end points / node terminations (nodes not of degree 2). Branches: joints of degree >= 3.

calc_max_dist_to_skels()[source]
static find_joints(node_list)[source]
get_closest_node(position_s)[source]
get_hull_branch_direc_cutoff(*args0, **kwargs0)[source]
get_hull_branch_dist_cutoff(*args0, **kwargs0)[source]
get_hull_points_inner(*args0, **kwargs0)[source]
get_hull_skel_direc_rel(*args0, **kwargs0)[source]
get_kdtree(*args0, **kwargs0)[source]
get_knn(*args0, **kwargs0)[source]
get_loss_and_gradient(new_position_s, cutoff_inner=0.3333333333333333, rise_factor=0.1)[source]

prediction_c (zxy). Zoned error surface: flat in the inner hull (selected at cutoff_inner); constant gradient in the “outer” hull towards the nearest inner hull voxel; gradient increasing with distance (scaled by rise_factor) for predictions outside the hull.

static get_scale_factor(radius, old_factor, scale_strenght)[source]
Parameters:
  • radius – Predicted radius (not the true radius)
  • old_factor – Factor by which the radius prediction and the image were scaled
  • scale_strenght – Limits the maximal scale factor
Returns:

Return type:

new_factor

getbatch(prediction, scale_strenght, **get_batch_kwargs)[source]
Parameters:
  • prediction – [[new_position_c, radius, ]]
  • scale_strenght – Limits the maximal scale factor for zoom
  • get_batch_kwargs
Returns:

batch

Return type:

img, target_img, target_grid, target_node

init_from_annotation(skeleton_annotatation, min_radius=None, interpolation_resolution=0.5, interpolation_order=1)[source]
interpolate_bone(bone, max_k=1, resolution=0.5)[source]
interpolate_prop(old_bone, old_prop, new_bone, discrete=False)[source]
make_grid = <elektronn2.utils.utils_basic.cache object>
map_hull(hull_points)[source]

Distances already take the anisotropy in z into account (i.e. they are true distances), but all coordinates for hulls and vectors are still pixel coordinates.

plot_debug_traces(grads=True, fig=None)[source]
plot_hull(fig=None)[source]
plot_hull_inner(cutoff, fig=None)[source]
plot_radii(fig=None)[source]
plot_skel(fig=None)[source]
plot_vec(substep=15, dict_name='skel', key='direc', vec=None, fig=None)[source]
static point_potential(r, margin_scale, size, repulsion=None)[source]
sample_local_direction_iso(point, n_neighbors=6)[source]

For a given point, returns the local skeleton direction/orientation by fitting a line through the nearest neighbours; the sign is assigned randomly.

sample_skel_point(rng, joint_ratio=None)[source]
sample_tracing_direction_iso(rng, local_direction_iso, c=0.5)[source]

Sample a direction close to the local direction. There is a prior such that the normalised (0, 1) angle of deviation a has the distribution p(a) = 1/N * (1 - c*a), where N = 1 - c/2; tmp in the implementation is the inverse CDF of this distribution.
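
For illustration, a standalone sketch of drawing an angle with this prior via inverse-CDF sampling (this mirrors the distribution stated above, not necessarily the exact implementation):

    import numpy as np

    def sample_deviation_angle(rng, c=0.5):
        # p(a) = (1 - c*a) / N on [0, 1] with N = 1 - c/2
        # CDF: F(a) = (a - c*a**2 / 2) / N; solve F(a) = u for a
        N = 1.0 - c / 2.0
        u = rng.rand()
        if c == 0:
            return u                  # uniform if there is no prior
        return (1.0 - np.sqrt(1.0 - 2.0 * c * u * N)) / c

    rng = np.random.RandomState(0)
    angles = [sample_deviation_angle(rng, c=0.5) for _ in range(5)]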

sample_tube_point(rng, r_max_scale=0.9, joint_ratio=None)[source]

This is skeleton-node-based sampling: go to a random node, sample a random orthogonal direction and go a random distance in that direction (uniform over [0, r_max_scale * local maximal radius]).

save(fname)[source]
step_feedback(new_position_s, new_direction_is, pred_c, pred_features, cutoff_inner=0.3333333333333333, rise_factor=0.1)[source]
step_grid_update(grid, radius, bio)[source]
class elektronn2.data.skeleton.Trace(linked_skel=None, aniso_scale=2, max_cutoff=200, uturn_detection_k=40, uturn_detection_thresh=0.45, uturn_detection_hold=10, feature_count=7)[source]

Bases: object

Unless otherwise stated, all coordinates are in the skeleton system (xyz) with an anisotropic z-axis, and all distances are in pixels (conversion to mu: 1/100).

add_offset(off)[source]
append(coord, coord_cnn=None, grad=None, features=None)[source]
append_serial(*args)[source]
avg_dist_self
avg_dist_skel
avg_seg_length
max_dist_skel
min_dist_self
min_normed_dist_self
new_cut_trace(start, stop)[source]
new_reverted_trace()[source]
plot(grads=True, skel=True, rand_color=False, fig=None)[source]
runlength
save(fname)[source]
save_to_kzip(fname)[source]
split_uturns(return_accum_pathlength=False, print_stat=False)[source]
tortuosity(start=None, end=None)[source]

elektronn2.data.tracing_utils module

class elektronn2.data.tracing_utils.Tracer(model, z_shift=0, data_source=None, bounding_box_zyx=None, trace_kwargs={'aniso_scale': 2}, modus='m', shotgun_registry=None, registry_interval=None, reference_radius=18.0)[source]

Bases: object

get_scale_factor(radius, old_factor, scale_strenght)[source]
static perturb_direciton(direc, azimuth, polar)[source]
static plot_vectors(cv, vectors, fig=None)[source]
trace(position_l, direction_il, count, gamma=0, trace_xyz=None, linked_skel=None, check_for_lost_track=True, check_for_uturn=False, check_bb=True, profile=False, info_str=None, reject_obb_traces=False, initial_scale=None)[source]

Although position_l is in zyx order, the returned trace_obj is in xyz order.

static zeropad(a, length)[source]
class elektronn2.data.tracing_utils.CubeShape(shape, offset=None, center=None, input_excess=None, bbox_reduction=None)[source]

Bases: object

bbox_off_sh_cent(bbox_reduction=None)[source]
bbox_wrt_input()[source]
bbox_wrt_self()[source]
input_off_sh_cent(input_excess=None)[source]
shrink_off_sh_cent(amount)[source]
class elektronn2.data.tracing_utils.ShotgunRegistry(seeds_zyx, registry_extent, directions=None, debug=False, radius_discout=0.5, check_w=3, occupied_thresh=0.6, candidate_max_rel=0.75, candidate_max_min_margin=1.5)[source]

Bases: object

check(trace)[source]

Check if the trace goes into a masked volume. If so, find out to which trace tree it belongs and merge. Returns False to stop tracing. Mask seeds and volume mask by the current trace’s log.

W: window length on which to do the check

find_nearest_trace(coords_xyz)[source]

Find all other traces that are at least as close as 1.5 times the minimal (relative!) distance (compared to the closest point of each trace).

get_next_seed()[source]
new_trace(trace)[source]
plot_mask_vol(figure=None, adjust_tfs=False)[source]
update_mask(coords_xyz, radii, index=None)[source]

elektronn2.data.traindata module

Copyright (c) 2015 Marius Killinger, Sven Dorkenwald, Philipp Schubert. All rights reserved.

class elektronn2.data.traindata.Data(n_lab=None)[source]

Bases: object

Load and prepare data (base object).

createCVSplit(data, label, n_folds=3, use_fold=2, shuffle=False, random_state=None)[source]
getbatch(batch_size, source='train')[source]
class elektronn2.data.traindata.MNISTData(input_node, target_node, path=None, convert2image=True, warp_on=False, shift_augment=True, center=True)[source]

Bases: elektronn2.data.traindata.Data

convert_to_image()[source]

For MNIST / flattened 2D, single-layer, square images

static download()[source]
getbatch(batch_size, source='train')[source]
class elektronn2.data.traindata.PianoData(input_node, target_node, path='/home/mkilling/devel/data/PianoRoll/Nottingham_enc.pkl', n_tap=20, n_lab=58)[source]

Bases: elektronn2.data.traindata.Data

getbatch(batch_size, source='train')[source]
class elektronn2.data.traindata.PianoData_perc(input_node, target_node, path='/home/mkilling/devel/data/PianoRoll/Nottingham_enc.pkl', n_tap=20, n_lab=58)[source]

Bases: elektronn2.data.traindata.PianoData

getbatch(batch_size, source='train')[source]

elektronn2.data.transformations module

elektronn2.data.transformations.warp_slice(img, ps, M, target=None, target_ps=None, target_vec_ix=None, target_discrete_ix=None, last_ch_max_interp=False, ksize=0.5)[source]

Cuts a warped slice out of the input image and out of the target image. Warping is applied by multiplying the original source coordinates with the inverse of the homogeneous (forward) transformation matrix M.

“Source coordinates” (src_coords) signify the coordinates of voxels in img and target that are used to compose their respective warped versions. The idea here is that not the images themselves, but the coordinates from which they are read, are warped. This allows for much higher efficiency for large image volumes because we don’t have to calculate the expensive warping transform for the whole image, but only for the voxels that we eventually want to use for the new warped image. The transformed coordinates usually don’t align with the discrete voxel grids of the original images (meaning they are not integers), so the new voxel values are obtained by linear interpolation.

Parameters:
  • img (np.ndarray) – Image array in shape (f, z, x, y)
  • ps (tuple) – (spatial only) Patch size (z, x, y) (spatial shape of the neural network’s input node)
  • M (np.ndarray) – Forward warping transformation matrix (4x4). Must contain translations in source and target array.
  • target (np.ndarray or None) – Optional target array to be extracted in the same way.
  • target_ps (tuple) – Patch size for the target array.
  • target_vec_ix (list) – List of triples that denote vector value parts in the target array. E.g. [(0,1,2), (4,5,6)] denotes two vector fields, separated by a scalar field in channel 3.
  • last_ch_max_interp (bool) –
  • ksize (float) –
Returns:

  • img_new (np.ndarray) – Warped input image slice
  • target_new (np.ndarray or None) – Warped target image slice or None, if target is None.
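
The source-coordinate idea described above can be illustrated with a small standalone sketch (a simplified, single-channel 3D affine example using scipy; it is not the library's implementation, which additionally handles channels, targets and anisotropy):

    import numpy as np
    from scipy.ndimage import map_coordinates

    img = np.random.rand(32, 64, 64).astype(np.float32)      # toy (z, x, y) volume
    ps = (16, 32, 32)                                         # patch size to extract
    M = np.diag([1.0, 1.1, 0.9, 1.0])                         # toy 4x4 forward warp
    M_inv = np.linalg.inv(M)

    # Output (patch) coordinates in homogeneous form, shape (4, n_voxels)
    zz, xx, yy = np.meshgrid(*[np.arange(s) for s in ps], indexing='ij')
    coords = np.stack([zz, xx, yy, np.ones_like(zz)]).reshape(4, -1)

    # Warp the *coordinates*, then read the image at the warped positions
    src_coords = M_inv.dot(coords)[:3]
    patch = map_coordinates(img, src_coords, order=1).reshape(ps)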

elektronn2.data.transformations.get_tracing_slice(img, ps, pos, z_shift=0, aniso_factor=2, sample_aniso=True, gamma=0, scale_factor=1.0, direction_iso=None, target=None, target_ps=None, target_vec_ix=None, target_discrete_ix=None, rng=None, last_ch_max_interp=False)[source]
exception elektronn2.data.transformations.WarpingOOBError(*args, **kwargs)[source]

Bases: exceptions.ValueError

class elektronn2.data.transformations.Transform(M, position_l=None, aniso_factor=2)[source]

Bases: object

M_lin
M_lin_inv
cnn_coord2lab_coord(vec_c, add_offset_l=False)[source]
cnn_pred2lab_position(prediction_c)[source]
lab_coord2cnn_coord(vec_l)[source]
to_array()[source]
elektronn2.data.transformations.trafo_from_array(a)[source]
elektronn2.data.transformations.get_warped_slice(img, ps, aniso_factor=2, sample_aniso=True, warp_amount=1.0, lock_z=True, no_x_flip=False, perspective=False, target=None, target_ps=None, target_vec_ix=None, target_discrete_ix=None, rng=None)[source]

(Wraps elektronn2.data.transformations.warp_slice())

Generates the warping transformation parameters and composes them into a single 4D homogeneous transformation matrix M. Then this transformation is applied to img and target in the warp_slice() function and the transformed input and target image are returned.

Parameters:
  • img (np.array) – Input image
  • ps (np.array) – Patch size (spatial shape of the neural network’s input node)
  • aniso_factor (float) – Anisotropy factor that determines an additional scaling in z direction.
  • sample_aniso (bool) – Scale coordinates by 1 / aniso_factor while warping.
  • warp_amount (float) – Strength of the random warping transformation. A lower warp_amount will lead to less distorted images.
  • lock_z (bool) – Exclude z coordinates from the random warping transformations.
  • no_x_flip (bool) – Don’t flip x axis during random warping.
  • perspective (bool) – Apply perspective transformations (in addition to affine ones).
  • target (np.array) – Target image
  • target_ps (np.array) – Target patch size
  • target_vec_ix
  • target_discrete_ix
  • rng (np.random.mtrand.RandomState) – Random number generator state (obtainable by np.random.RandomState()). Passing a known state makes the random transformations reproducible.
Returns:

  • img_new (np.ndarray) – (Warped) input image slice
  • target_new (np.ndarray) – (Warped) target slice
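
Example: a minimal usage sketch with random toy data (shapes follow the (f, z, x, y) convention of warp_slice() above; WarpingOOBError is caught in case the sampled warp reads outside the volume):

    import numpy as np
    from elektronn2.data import transformations

    img = np.random.rand(1, 64, 128, 128).astype(np.float32)   # (f, z, x, y)
    ps = np.array([16, 48, 48])                                 # patch size (z, x, y)
    rng = np.random.RandomState(0)

    try:
        img_new, target_new = transformations.get_warped_slice(
            img, ps, aniso_factor=2, warp_amount=0.5, rng=rng)
        # target_new is None here because no target array was passed
    except transformations.WarpingOOBError:
        pass  # the warp fell outside the volume; a caller would typically retry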

Module contents