API#

Functions, broken down by category.

Run Suite2p#

lbm_suite2p_python.run_plane(input_path: str | Path, save_path: str | Path | None = None, ops: dict | str | Path = None, chan2_file: str | Path | None = None, keep_raw: bool = False, keep_reg: bool = True, force_reg: bool = False, force_detect: bool = False, dff_window_size: int = 300, dff_percentile: int = 20, save_json: bool = False, **kwargs) Path[source]#

Processes a single imaging plane using suite2p, handling registration, segmentation, and plotting of results.

Parameters:
input_path : str or Path

Full path to the file to process, with the file extension.

save_path : str or Path, optional

Directory to save the results.

ops : dict, str or Path, optional

Path to, or dict of, user-supplied ops.npy. If given, it overrides any existing or generated ops.

chan2_file : str or Path, optional

Path to structural / anatomical data used for registration.

keep_raw : bool, default False

If True, do not delete the raw binary (data_raw.bin) after processing.

keep_reg : bool, default True

If True, do not delete the registered binary (data.bin) after processing.

force_reg : bool, default False

If True, force a new registration even if existing shifts are found in ops.npy.

force_detect : bool, default False

If True, force ROI detection even if an existing stat.npy is present.

dff_window_size : int, default 300

Size of the rolling window (in frames) used for the ΔF/F₀ baseline calculation.

dff_percentile : int, default 20

Percentile to use for baseline F₀ estimation in the ΔF/F₀ calculation.

save_json : bool, default False

If True, saves ops as a JSON file in addition to .npy.

**kwargs : dict, optional

Returns:
Path

Path to the ops.npy file containing the processed results.

Raises:
FileNotFoundError

If input_path does not exist.

TypeError

If save_path is not a string or Path.

Exception

If plotting functions fail.

Notes

  • ops supplied via the ops argument take precedence over previously saved ops.npy files.
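
A minimal usage sketch for a single plane; the input file and output directory are hypothetical, and the keyword values mirror the defaults shown in the signature:

>>> import lbm_suite2p_python as lsp
>>> ops_path = lsp.run_plane(
...     "plane01.tif",                 # hypothetical input file
...     save_path="outputs/plane01",   # hypothetical output directory
...     keep_reg=True,
...     dff_window_size=300,
...     dff_percentile=20,
... )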

lbm_suite2p_python.run_volume(input_files: list, save_path: str | Path = None, ops: dict | str | Path = None, keep_reg: bool = True, keep_raw: bool = False, force_reg: bool = False, force_detect: bool = False, dff_window_size: int = 500, dff_percentile: int = 20, save_json: bool = False, **kwargs)[source]#

Processes a full volumetric imaging dataset using Suite2p, handling plane-wise registration, segmentation, plotting, and aggregation of volumetric statistics and visualizations.

Supports planar TIFF, contiguous .zarr, and Suite2p .bin inputs, and automatically merges multi-ROI datasets acquired with ScanImage’s multi-ROI mode.

Parameters:
input_files : list of str or Path

List of file paths, each representing a single imaging plane. Supported formats:

  • .tif files (e.g., “plane01.tif”, “plane02.tif”)

  • .bin files from mbo.imwrite (e.g., “plane01_stitched/data_raw.bin”)

  • .zarr files (e.g., “plane01_roi01.zarr”, “plane01_roi02.zarr”)

For binary inputs, an accompanying ops.npy must exist in the parent directory.

save_path : str or Path, optional

Base directory to save all outputs. If None, creates a “volume” directory in the parent of the first input file. For binary inputs with ops.npy, processing occurs in-place at the parent directory.

ops : dict or str or Path, optional

Suite2p parameters to use for each imaging plane. Can be:

  • Dictionary of parameters

  • Path to an ops.npy file

  • None (uses defaults from default_ops())

keep_raw : bool, default False

If True, do not delete the raw binary (data_raw.bin) after processing.

keep_reg : bool, default True

If True, keep the registered binary (data.bin) after processing.

force_reg : bool, default False

If True, force re-registration even if refImg/meanImg/xoff exist in ops.npy.

force_detect : bool, default False

If True, force ROI detection even if stat.npy exists and is non-empty.

dff_window_size : int, default 500

Number of frames to use for the rolling percentile baseline in ΔF/F₀ calculations.

dff_percentile : int, default 20

Percentile to use for baseline F₀ estimation (e.g., 20 = 20th percentile).

save_json : bool, default False

If True, saves ops as JSON in addition to .npy format.

**kwargs

Additional keyword arguments passed to run_plane().

Returns:
list of Path

List of paths to ops.npy files for each plane (or merged plane if mROI).

See also

run_plane

Process a single imaging plane

run_plane_bin

Process an existing binary file through Suite2p pipeline

merge_mrois

Manual multi-ROI merging function

Notes

Directory Structure:

For standard single-ROI data:

save_path/
├── plane01/
│   ├── ops.npy, stat.npy, F.npy, Fneu.npy, spks.npy, iscell.npy
│   ├── data.bin (registered binary, if keep_reg=True)
│   └── [visualization PNGs]
├── plane02/
│   └── ...
├── volume_stats.npy          # Per-plane statistics
├── mean_volume_signal.png    # Signal strength across planes
└── rastermap.png             # Clustered activity (if rastermap installed)

Multi-ROI Merging:

When input filenames contain “roi” (case-insensitive), e.g., “plane01_roi01.tif”, “plane01_roi02.tif”, the pipeline automatically detects multi-ROI acquisition and performs horizontal stitching after planar processing:

save_path/
├── plane01_roi01/           # Individual ROI results
│   └── [Suite2p outputs]
├── plane01_roi02/
│   └── [Suite2p outputs]
├── merged_mrois/            # Merged results (used for volumetric stats)
│   ├── plane01/
│   │   ├── ops.npy          # Merged ops with Lx = sum of ROI widths
│   │   ├── stat.npy         # Concatenated ROIs with xpix offsets applied
│   │   ├── F.npy, spks.npy  # Concatenated traces
│   │   ├── data.bin         # Horizontally stitched binary
│   │   └── [merged visualizations]
│   └── plane02/
│       └── ...
└── [volumetric outputs as above]

The merging process (see the sketch below):

  • Groups directories by plane number (e.g., “plane01_roi01”, “plane01_roi02” → “plane01”)

  • Horizontally concatenates images (refImg, meanImg, meanImgE, max_proj)

  • Adjusts stat[“xpix”] and stat[“med”] coordinates to account for the horizontal offset

  • Concatenates fluorescence traces (F, Fneu, spks) and cell classifications (iscell)

  • Creates stitched binary files by horizontally stacking frames
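
A minimal sketch of the coordinate adjustment for a two-ROI plane, using hypothetical paths and standard Suite2p output arrays (illustrative only; the pipeline, or merge_mrois, performs the actual merge):

>>> import numpy as np
>>> # hypothetical paths; each ROI directory holds standard Suite2p outputs
>>> ops_roi01 = np.load("plane01_roi01/ops.npy", allow_pickle=True).item()
>>> stat_roi02 = np.load("plane01_roi02/stat.npy", allow_pickle=True)
>>> x_offset = ops_roi01["Lx"]                    # width of the first ROI
>>> for roi in stat_roi02:
...     roi["xpix"] = roi["xpix"] + x_offset      # shift x pixel indices
...     roi["med"][1] = roi["med"][1] + x_offset  # stat["med"] is [y, x] in Suite2p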

Supported Input Scenarios:

  1. TIFF files (standard workflow):

    input_files = ["plane01.tif", "plane02.tif", "plane03.tif"]
    lsp.run_volume(input_files, save_path="outputs")
    
  2. Binary files from interrupted processing:

    input_files = [
        "plane01_stitched/data_raw.bin",
        "plane02_stitched/data_raw.bin",
    ]
    lsp.run_volume(input_files)  # Processes in-place
    
  3. Multi-ROI TIFF files (automatic merging):

    input_files = [
        "plane01_roi01.tif", "plane01_roi02.tif",
        "plane02_roi01.tif", "plane02_roi02.tif",
    ]
    lsp.run_volume(input_files, save_path="outputs")
    
  4. Mixed input types:

    input_files = [
        "plane01.tif",                      # New TIFF
        "plane02_stitched/data_raw.bin",    # Existing binary
    ]
    lsp.run_volume(input_files, save_path="outputs")
    

lbm_suite2p_python.run_grid_search(base_ops: dict, grid_search_dict: dict, input_file: str | Path, save_root: str | Path, force_reg: bool, force_detect: bool)[source]#

Run a grid search over all combinations of the input suite2p parameters.

Parameters:
base_ops : dict

Dictionary of default Suite2p ops to start from. Each parameter combination will override values in this dictionary.

grid_search_dict : dict

Dictionary mapping parameter names (str) to a list of values to grid search. Each combination of values across parameters will be run once.

input_file : str or Path

Path to the input data file; currently only TIFF is supported.

save_root : str or Path

Root directory where each parameter combination’s output will be saved. A subdirectory will be created for each run using a short parameter tag.

force_reg : bool

Whether to force suite2p registration.

force_detect : bool

Whether to force suite2p detection.

Examples

>>> import lbm_suite2p_python as lsp
>>> import suite2p
>>> base_ops = suite2p.default_ops()
>>> base_ops["anatomical_only"] = 3
>>> base_ops["diameter"] = 6
>>> lsp.run_grid_search(
...     base_ops,
...     {"threshold_scaling": [1.0, 1.2], "tau": [0.1, 0.15]},
...     input_file="/mnt/data/assembled_plane_03.tiff",
...     save_root="/mnt/grid_search/"
... )

This will create the following output directory structure:

/mnt/grid_search/
├── thr1.00_tau0.10/
│   └── suite2p output for threshold_scaling=1.0, tau=0.1
├── thr1.00_tau0.15/
├── thr1.20_tau0.10/
└── thr1.20_tau0.15/

Load results#

lbm_suite2p_python.load_ops(ops_input: str | Path | list[str | Path]) dict[source]#

Simple utility to load a Suite2p ops.npy file.
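
For example, assuming a hypothetical output directory produced by run_plane:

>>> import lbm_suite2p_python as lsp
>>> ops = lsp.load_ops("outputs/plane01/ops.npy")   # hypothetical path
>>> ops["Lx"], ops["Ly"]                            # image dimensions stored by Suite2p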

lbm_suite2p_python.load_planar_results(ops: dict | str | Path, z_plane: list | int = None) dict[source]#

Load the stat, iscell, and spks files and return them as a dict. Does NOT filter by valid cells; the arrays contain both accepted and rejected neurons. Filter for accepted-only neurons via F[iscell] or Fneu[iscell] if needed.

Parameters:
ops : dict, str or Path

Ops dict, or path to the ops.npy file. The path can be fully qualified or a directory containing ops.npy.

z_plane : int or None, optional

The z-plane index for this file. If provided, it is stored in the output.

Returns:
dict

Dictionary with keys:

  • ‘F’: fluorescence traces loaded from F.npy

  • ‘Fneu’: neuropil fluorescence traces loaded from Fneu.npy

  • ‘spks’: spike traces loaded from spks.npy

  • ‘stat’: stats loaded from stat.npy

  • ‘iscell’: boolean array from iscell.npy

  • ‘cellprob’: cell probability from the classifier

  • ‘z_plane’: an array of shape [n_neurons,] with the provided z_plane index

See also

lbm_suite2p_python.load_ops
lbm_suite2p_python.load_traces
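
A short sketch of loading results and filtering for accepted cells, using a hypothetical output path:

>>> import lbm_suite2p_python as lsp
>>> res = lsp.load_planar_results("outputs/plane01/ops.npy", z_plane=0)
>>> accepted = res["iscell"]                 # boolean mask of accepted neurons
>>> f_accepted = res["F"][accepted]          # fluorescence traces, accepted only
>>> spks_accepted = res["spks"][accepted]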

Plot results#

lbm_suite2p_python.plot_projection(ops, output_directory=None, fig_label=None, vmin=None, vmax=None, add_scalebar=False, proj='meanImg', display_masks=False, accepted_only=False)[source]#
lbm_suite2p_python.plot_rastermap(spks, model, neuron_bin_size=None, fps=17, vmin=0, vmax=0.8, xmin=0, xmax=None, save_path=None, title=None, title_kwargs=None, fig_text=None)[source]#
lbm_suite2p_python.plot_volume_signal(zstats, savepath)[source]#

Plots the mean fluorescence signal per z-plane with standard deviation error bars.

This function loads signal statistics from a .npy file and visualizes the mean fluorescence signal per z-plane, with error bars representing the standard deviation.

Parameters:
zstats : str or Path

Path to the .npy file containing the volume stats. The output of get_zstats().

savepath : str or Path

Path to save the generated figure.

Notes

  • The .npy file should contain structured data with plane, mean_trace, and std_trace fields.

  • Error bars represent the standard deviation of the fluorescence signal.
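
For example, using the volume_stats.npy produced by run_volume (paths are illustrative):

>>> import lbm_suite2p_python as lsp
>>> lsp.plot_volume_signal("outputs/volume_stats.npy", "outputs/mean_volume_signal.png")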

lbm_suite2p_python.plot_traces(f, save_path: str | Path = '', cell_indices: ndarray | list[int] | None = None, fps=17.0, num_neurons=20, window=220, title='', offset=None, lw=0.5, cmap='tab10', signal_units=None) None[source]#

Plot stacked fluorescence traces with automatic offset and scale bars.

Parameters:
f : ndarray

2d array of fluorescence traces (n_neurons x n_timepoints).

save_path : str, optional

Path to save the output plot.

fps : float

Sampling rate in frames per second.

num_neurons : int

Number of neurons to display if cell_indices is None.

window : float

Time window (in seconds) to display.

title : str

Title of the figure.

offset : float or None

Vertical offset between traces; if None, computed automatically.

lw : float

Line width for data points.

cmap : str

Matplotlib colormap string.

signal_units : str, optional

Units of fluorescence signal.

cell_indices : array-like or None

Specific cell indices to plot. If provided, overrides num_neurons.
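
A minimal sketch, assuming planar results have already been saved to a hypothetical output directory:

>>> import lbm_suite2p_python as lsp
>>> res = lsp.load_planar_results("outputs/plane01/ops.npy")   # hypothetical path
>>> lsp.plot_traces(
...     res["F"],
...     save_path="outputs/plane01/traces.png",
...     fps=17.0,
...     num_neurons=20,
... )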

lbm_suite2p_python.plot_execution_time(filepath, savepath)[source]#

Plots the execution time for each processing step per z-plane.

This function loads execution timing data from a .npy file and visualizes the runtime of different processing steps as a stacked bar plot with a black background.

Parameters:
filepath : str or Path

Path to the .npy file containing the volume timing stats.

savepath : str or Path

Path to save the generated figure.

Notes

  • The .npy file should contain structured data with plane, registration, detection, extraction, classification, deconvolution, and total_plane_runtime fields.
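
A sketch of building a structured array with the fields listed above and plotting it; the file name and timing values are hypothetical:

>>> import numpy as np
>>> import lbm_suite2p_python as lsp
>>> timing = np.array(
...     [(1, 12.3, 45.6, 20.1, 1.2, 8.4, 90.0)],
...     dtype=[("plane", "i4"), ("registration", "f8"), ("detection", "f8"),
...            ("extraction", "f8"), ("classification", "f8"),
...            ("deconvolution", "f8"), ("total_plane_runtime", "f8")],
... )
>>> np.save("outputs/volume_timing.npy", timing)   # hypothetical path
>>> lsp.plot_execution_time("outputs/volume_timing.npy", "outputs/execution_time.png")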


Post-Processing#

lbm_suite2p_python.dff_rolling_percentile(f_trace, window_size=300, percentile=20, use_median_floor: bool = False)[source]#

Compute ΔF/F₀ using a rolling percentile baseline.
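
For example, computing ΔF/F₀ from previously extracted traces (paths and window values are illustrative):

>>> import lbm_suite2p_python as lsp
>>> res = lsp.load_planar_results("outputs/plane01/ops.npy")   # hypothetical path
>>> dff = lsp.dff_rolling_percentile(res["F"], window_size=300, percentile=20)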

lbm_suite2p_python.dff_median_filter(f_trace)[source]#

Compute ΔF/F₀ using a rolling median filter baseline.

lbm_suite2p_python.dff_shot_noise(dff, fr)[source]#

Estimate the shot noise level of calcium imaging traces.

This metric quantifies the noise level based on frame-to-frame differences, assuming slow calcium dynamics compared to the imaging frame rate. It was introduced by Rupprecht et al. (2021) [1] as a standardized method for comparing noise levels across datasets with different acquisition parameters.

The noise level \(\nu\) is computed as:

\[\nu = \frac{\mathrm{median}_t\left( \left| \Delta F/F_{t+1} - \Delta F/F_t \right| \right)}{\sqrt{f_r}}\]
where
  • \(\Delta F/F_t\) is the fluorescence trace at time \(t\)

  • \(f_r\) is the imaging frame rate (in Hz).

Parameters:
dff : np.ndarray

Array of shape (n_neurons, n_frames), containing raw \(\Delta F/F\) traces (percent units, without neuropil subtraction).

fr : float

Frame rate of the recording in Hz.

Returns:
np.ndarray

Noise level \(\nu\) for each neuron, expressed in %/√Hz units.

Notes

  • The metric relies on the slow dynamics of calcium signals compared to frame rate.

  • Higher values of \(\nu\) indicate higher shot noise.

  • Units are % divided by √Hz, and while unconventional, they enable comparison across frame rates.
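
A usage sketch with hypothetical paths; the scaling of the ΔF/F traces to percent units is an assumption based on the parameter description above:

>>> import lbm_suite2p_python as lsp
>>> res = lsp.load_planar_results("outputs/plane01/ops.npy")   # hypothetical path
>>> dff = lsp.dff_rolling_percentile(res["F"]) * 100           # percent units, per the parameter description
>>> noise = lsp.dff_shot_noise(dff, fr=17.0)                   # one value per neuron, in %/√Hz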

References

[1] Rupprecht et al., “Large-scale calcium imaging & noise levels”,

A Neuroscientific Blog (2021). https://gcamp6f.com/2021/10/04/large-scale-calcium-imaging-noise-levels/