API#

lbm_caiman_python.add_processing_step(ops, step_name, input_files=None, duration_seconds=None, extra=None)[source]#

Add a processing step to ops["processing_history"].

Parameters:
ops : dict

The ops dictionary to update.

step_name : str

Name of the processing step.

input_files : list of str, optional

List of input file paths.

duration_seconds : float, optional

How long this step took, in seconds.

extra : dict, optional

Additional metadata.

Returns:
dict

The updated ops dictionary.
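
A minimal usage sketch (the step name, file list, and extra metadata below are illustrative values, not library defaults):

>>> import time
>>> import lbm_caiman_python as lcp
>>> ops = lcp.default_ops()
>>> t0 = time.time()
>>> # ... run a processing step here ...
>>> ops = lcp.add_processing_step(
...     ops,
...     step_name="motion_correction",
...     input_files=["plane01.tif"],
...     duration_seconds=time.time() - t0,
...     extra={"notes": "illustrative entry"},
... )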

lbm_caiman_python.calculate_centers(A, dims)[source]#
lbm_caiman_python.cnmf_ops() dict[source]#

Return only the CNMF-related parameters.

lbm_caiman_python.combine_z_planes(results: dict)[source]#

Combines all z-planes in the results dictionary into a single estimates object.

Parameters:
results : dict

Dictionary with estimates for each z-plane.

Returns:
estimates.Estimates

Combined estimates for all z-planes.
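
A minimal usage sketch; the integer keys and pulling the "estimates" entry out of each per-plane result are assumptions here, and the paths are hypothetical:

>>> import lbm_caiman_python as lcp
>>> r1 = lcp.load_planar_results("outdir/zplane01")
>>> r2 = lcp.load_planar_results("outdir/zplane02")
>>> combined = lcp.combine_z_planes({1: r1["estimates"], 2: r2["estimates"]})  # assumed key layout
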
lbm_caiman_python.compute_roi_stats(plane_dir) dict[source]#

Compute ROI quality statistics.

Parameters:
plane_dir : str or Path

Path to the plane output directory.

Returns:
dict

Dictionary of ROI statistics.
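
A minimal usage sketch (the output directory path is hypothetical, and the exact statistic names are not specified here):

>>> import lbm_caiman_python as lcp
>>> stats = lcp.compute_roi_stats("outdir/zplane01")
>>> sorted(stats.keys())  # inspect which statistics were computed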

lbm_caiman_python.default_ops() dict[source]#

Return default CaImAn parameters optimized for LBM microscopy data.

Returns:
dict

Dictionary of parameters for motion correction and CNMF.
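
A short sketch of how the three parameter helpers relate: default_ops() returns the full set, while mcorr_ops() and cnmf_ops() return only the motion-correction and CNMF subsets, respectively:

>>> import lbm_caiman_python as lcp
>>> ops = lcp.default_ops()        # full parameter dictionary
>>> mc = lcp.mcorr_ops()           # motion-correction parameters only
>>> cnmf = lcp.cnmf_ops()          # CNMF parameters only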

lbm_caiman_python.dff_rolling_percentile(F: ndarray, window_size: int = None, percentile: int = 20, smooth_window: int = None, fs: float = 30.0, tau: float = 1.0) ndarray[source]#

Compute dF/F using a rolling-percentile baseline.

Parameters:
F : np.ndarray

Fluorescence traces, shape (n_cells, n_frames).

window_size : int, optional

Number of frames for the rolling percentile window. Default: ~10*tau*fs.

percentile : int, default 20

Percentile used for the baseline F0.

smooth_window : int, optional

Smoothing window for dF/F.

fs : float, default 30.0

Frame rate in Hz.

tau : float, default 1.0

Decay time constant in seconds.

Returns:
np.ndarray

dF/F traces, same shape as F.
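
A minimal sketch on synthetic traces (the frame rate and trace sizes are arbitrary):

>>> import numpy as np
>>> import lbm_caiman_python as lcp
>>> F = np.random.rand(50, 3000)   # 50 cells, 3000 frames
>>> dff = lcp.dff_rolling_percentile(F, percentile=20, fs=17.0, tau=1.0)
>>> dff.shape
(50, 3000)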

lbm_caiman_python.extract_center_square(images, size)[source]#

Extract a square crop from the center of the input images.

Parameters:
images : numpy.ndarray

Input array. Can be 2D (H x W) or 3D (T x H x W), where:
  • H is the height of the image(s).
  • W is the width of the image(s).
  • T is the number of frames (if 3D).

size : int

The size of the square crop. The output will have dimensions (size x size) for 2D inputs or (T x size x size) for 3D inputs.

Returns:
numpy.ndarray

A square crop from the center of the input images. The returned array will have dimensions:
  • (size x size) if the input is 2D.
  • (T x size x size) if the input is 3D.

Raises:
ValueError

If images is not a NumPy array. If images is not 2D or 3D. If the specified size is larger than the height or width of the input images.

Notes

  • For 2D arrays, the function extracts a square crop directly from the center.

  • For 3D arrays, the crop is applied uniformly across all frames (T).

  • If the input dimensions are smaller than the requested size, an error will be raised.

Examples

Extract a center square from a 2D image:

>>> import numpy as np
>>> image = np.random.rand(600, 576)
>>> cropped = extract_center_square(image, size=200)
>>> cropped.shape
(200, 200)

Extract a center square from a 3D stack of images:

>>> stack = np.random.rand(100, 600, 576)
>>> cropped_stack = extract_center_square(stack, size=200)
>>> cropped_stack.shape
(100, 200, 200)
lbm_caiman_python.generate_patch_view(image: Any, pixel_resolution: float, target_patch_size: int = 40, overlap_fraction: float = 0.5)[source]#

Generate a patch visualization for a 2D image with approximately square patches of a specified size in microns. Patches are evenly distributed across the image, using calculated strides and overlaps.

Parameters:
image : ndarray

A 2D NumPy array representing the input image to be divided into patches.

pixel_resolution : float

The pixel resolution of the image in microns per pixel.

target_patch_size : int, optional

The desired size of the patches in microns. Default is 40 microns.

overlap_fraction : float, optional

The fraction of the patch size to use as overlap between patches. Default is 0.5 (50%).

Returns:
fig : matplotlib.figure.Figure

A matplotlib figure containing the patch visualization.

ax : matplotlib.axes.Axes

A matplotlib axes object showing the patch layout on the image.

Examples

>>> import numpy as np
>>> from matplotlib import pyplot as plt
>>> data = np.random.random((144, 600))  # Example 2D image
>>> pixel_resolution = 0.5  # Microns per pixel
>>> fig, ax = generate_patch_view(data, pixel_resolution)
>>> plt.show()
lbm_caiman_python.generate_plane_dirname(plane: int, nframes: int = None, frame_start: int = 1, frame_stop: int = None, suffix: str = None) str[source]#

Generate a descriptive directory name for a plane's outputs.

Parameters:
plane : int

Z-plane number (1-based).

nframes : int, optional

Total number of frames.

frame_start : int, default 1

First frame (1-based).

frame_stop : int, optional

Last frame (1-based).

suffix : str, optional

Additional suffix.

Returns:
str

Directory name such as "zplane01" or "zplane03_tp00001-05000".
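
A usage sketch; the exact names produced for a given argument combination are assumptions based on the examples in the return description above:

>>> import lbm_caiman_python as lcp
>>> lcp.generate_plane_dirname(plane=1)                                  # e.g. "zplane01"
>>> lcp.generate_plane_dirname(plane=3, frame_start=1, frame_stop=5000)  # e.g. "zplane03_tp00001-05000"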

lbm_caiman_python.get_accepted_cells(plane_dir) tuple[source]#

Get indices of accepted and rejected cells.

Parameters:
plane_dir : str or Path

Path to the plane output directory.

Returns:
tuple

(accepted_indices, rejected_indices)
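
A minimal usage sketch (the plane directory is hypothetical):

>>> import lbm_caiman_python as lcp
>>> accepted, rejected = lcp.get_accepted_cells("outdir/zplane01")
>>> len(accepted), len(rejected)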

lbm_caiman_python.get_contours(plane_dir, threshold: float = 0.5) list[source]#

Get cell contours for visualization.

Parameters:
plane_dir : str or Path

Path to the plane output directory.

threshold : float, default 0.5

Threshold for contour extraction (fraction of max).

Returns:
list

List of contour coordinates for each cell.
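
A minimal usage sketch (the plane directory is hypothetical; the coordinate layout of each contour is not specified here):

>>> import lbm_caiman_python as lcp
>>> contours = lcp.get_contours("outdir/zplane01", threshold=0.5)
>>> len(contours)  # one entry per cell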

lbm_caiman_python.get_noise_fft(Y, noise_range=None, noise_method='logmexp', max_num_samples_fft=3072)[source]#

Compute the noise level in the Fourier domain for a given signal.

Parameters:
Y : ndarray

Input data array. The last dimension is treated as time.

noise_range : list of float, optional

Frequency range to estimate noise, by default [0.25, 0.5].

noise_method : str, optional

Method to compute the mean noise power spectral density (PSD), by default "logmexp".

max_num_samples_fft : int, optional

Maximum number of samples to use for FFT computation, by default 3072.

Returns:
tuple
  • sn : float or ndarray

    Estimated noise level.

  • psdx : ndarray

    Power spectral density of the input data.
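
A minimal sketch on synthetic data (the trace count and length are arbitrary):

>>> import numpy as np
>>> import lbm_caiman_python as lcp
>>> Y = np.random.rand(20, 1000)   # 20 traces, time along the last axis
>>> sn, psdx = lcp.get_noise_fft(Y, noise_range=[0.25, 0.5])  # sn: one noise estimate per trace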

lbm_caiman_python.get_single_patch_coords(dims, stride, overlap, patch_index)[source]#

Get the coordinates of a single patch based on the stride and overlap parameters used for motion correction.

Parameters:
dims : tuple

Dimensions of the image as (rows, cols).

stride : int

Number of pixels to include in each patch.

overlap : int

Number of pixels to overlap between patches.

patch_index : tuple

Index of the patch to return.

lbm_caiman_python.greedyROI(Y, nr=30, gSig=[5, 5], gSiz=[11, 11], nIter=5, kernel=None, nb=1, rolling_sum=False, rolling_length=100, seed_method='auto')[source]#

Greedy initialization of spatial and temporal components using spatial Gaussian filtering.

Parameters:
  • Y – np.array 3d or 4d array of fluorescence data with time appearing in the last axis.

  • nr – int number of components to be found

  • gSig – scalar or list of integers standard deviation of Gaussian kernel along each axis

  • gSiz – scalar or list of integers size of spatial component

  • nIter – int number of iterations when refining estimates

  • kernel – np.ndarray User specified kernel to be used, if present, instead of Gaussian (default None)

  • nb – int Number of background components

  • rolling_sum – boolean Detect new components based on a rolling sum of pixel activity (default: False)

  • rolling_length – int Length of rolling window (default: 100)

  • seed_method – str {'auto', 'manual', 'semi'} Method for choosing seed pixels. 'semi' detects nr components automatically and allows adding more manually when running in a notebook; 'semi' and 'manual' require a backend that does not inline figures, e.g. %matplotlib tk

Returns:

A : np.array

2d array of size (# of pixels) x nr with the spatial components. Each column is ordered columnwise (MATLAB format, order='F').

C : np.array

2d array of size nr x T with the temporal components.

center : np.array

2d array of size nr x 2 [or 3] with the component centroids.

Author:
Eftychios A. Pnevmatikakis and Andrea Giovannucci, based on a MATLAB implementation by Yuanjun Gao

Simons Foundation, 2015

lbm_caiman_python.load_ops(ops_input) dict[source]#

Load ops from a file, or return a dict as-is.

Parameters:
ops_input : str, Path, dict, or None

Path to an ops.npy file, a dictionary, or None.

Returns:
dict

Ops dictionary.
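
A minimal usage sketch (the ops.npy path is hypothetical):

>>> import lbm_caiman_python as lcp
>>> ops = lcp.load_ops("outdir/zplane01/ops.npy")   # load from file
>>> ops = lcp.load_ops({"some_key": 1})             # a dict is returned as-is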

lbm_caiman_python.load_planar_results(plane_dir) dict[source]#

Load all results from a plane directory.

Parameters:
plane_dir : str or Path

Path to the plane output directory.

Returns:
dict

Dictionary containing ops, estimates, F, dff, etc.
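
A minimal usage sketch (the plane directory is hypothetical; the key names follow the description above and may not be exhaustive):

>>> import lbm_caiman_python as lcp
>>> results = lcp.load_planar_results("outdir/zplane01")
>>> sorted(results.keys())  # expect entries such as 'ops', 'estimates', 'F', 'dff'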

lbm_caiman_python.mcorr_ops() dict[source]#

Return only the motion-correction parameters.

lbm_caiman_python.norm_minmax(images)[source]#

Normalize a NumPy array to the range [0, 1].

lbm_caiman_python.pipeline(input_data, save_path: str | Path = None, ops: dict = None, planes: list | int = None, roi_mode: int = None, force_mcorr: bool = False, force_cnmf: bool = False, num_timepoints: int = None, reader_kwargs: dict = None, writer_kwargs: dict = None) list[source]#

Unified CaImAn processing pipeline.

Auto-detects 3D (single plane) vs 4D (volume) input and delegates to run_plane or run_volume accordingly.

Parameters:
input_data : str, Path, list, or lazy array

Input data source (file, directory, list of files, or array).

save_path : str or Path, optional

Output directory.

ops : dict, optional

CaImAn parameters. Uses default_ops() if not provided.

planes : int or list, optional

Planes to process (1-based index).

roi_mode : int, optional

ROI mode for ScanImage data (None=stitch, 0=split, N=single).

force_mcorr : bool, default False

Force re-run of motion correction.

force_cnmf : bool, default False

Force re-run of CNMF.

num_timepoints : int, optional

Limit the number of frames to process.

reader_kwargs : dict, optional

Arguments for mbo_utilities.imread.

writer_kwargs : dict, optional

Arguments for writing.

Returns:
list[Path]

List of paths to ops.npy files.
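
A usage sketch for a volumetric run; the paths, plane selection, and frame limit are illustrative:

>>> import lbm_caiman_python as lcp
>>> ops_paths = lcp.pipeline(
...     "raw/session01",               # file or directory with raw data
...     save_path="processed/session01",
...     ops=lcp.default_ops(),
...     planes=[1, 2, 3],              # 1-based plane indices
...     num_timepoints=5000,           # limit the number of frames processed
... )
>>> ops_paths                          # one ops.npy path per processed plane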

lbm_caiman_python.run_plane(input_data, save_path: str | Path = None, ops: dict = None, force_mcorr: bool = False, force_cnmf: bool = False, plane_name: str = None, reader_kwargs: dict = None, writer_kwargs: dict = None) Path[source]#

Process a single imaging plane using CaImAn.

Runs motion correction and CNMF, and generates diagnostic plots.

Parameters:
input_data : str, Path, or array

Input data (file path or array).

save_path : str or Path, optional

Output directory.

ops : dict, optional

CaImAn parameters.

force_mcorr : bool, default False

Force motion correction.

force_cnmf : bool, default False

Force CNMF.

plane_name : str, optional

Custom name for the output directory.

reader_kwargs : dict, optional

Arguments for imread.

writer_kwargs : dict, optional

Arguments for writing.

Returns:
Path

Path to the ops.npy file.
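
A usage sketch for a single plane; the input path is illustrative:

>>> import lbm_caiman_python as lcp
>>> ops_path = lcp.run_plane(
...     "raw/plane01.tif",
...     save_path="processed",
...     ops=lcp.default_ops(),
...     force_cnmf=True,               # re-run CNMF even if previous results exist
... )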

lbm_caiman_python.run_volume(input_data, save_path: str | Path = None, ops: dict = None, planes: list | int = None, force_mcorr: bool = False, force_cnmf: bool = False, reader_kwargs: dict = None, writer_kwargs: dict = None) list[source]#

Process volumetric (4D: t, z, y, x) imaging data.

Iterates over z-planes and calls run_plane for each.

Parameters:
input_data : list, Path, or array

Input data source.

save_path : str or Path, optional

Base directory for outputs.

ops : dict, optional

CaImAn parameters.

planes : list or int, optional

Specific planes to process (1-based).

force_mcorr : bool, default False

Force motion correction.

force_cnmf : bool, default False

Force CNMF.

reader_kwargs : dict, optional

Arguments for imread.

writer_kwargs : dict, optional

Arguments for writing.

Returns:
list[Path]

List of paths to ops.npy files.
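
A usage sketch for a 4D dataset; the input path and plane selection are illustrative:

>>> import lbm_caiman_python as lcp
>>> ops_paths = lcp.run_volume(
...     "raw/volume_session",
...     save_path="processed",
...     planes=[1, 5, 10],             # process only a subset of z-planes
... )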

lbm_caiman_python.smooth_data(data, window_size=5)[source]#

Smooth the data using a moving average.
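
A minimal sketch on a synthetic trace (whether multi-dimensional inputs are supported is not specified here, so a 1D array is assumed):

>>> import numpy as np
>>> import lbm_caiman_python as lcp
>>> trace = np.random.rand(1000)
>>> smoothed = lcp.smooth_data(trace, window_size=5)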