API
Functions, broken down by category.
Run Suite2p
- lbm_suite2p_python.pipeline(input_data, save_path: str | Path = None, ops: dict = None, planes: list | int = None, roi: int = None, keep_reg: bool = True, keep_raw: bool = False, force_reg: bool = False, force_detect: bool = False, dff_window_size: int = None, dff_percentile: int = 20, dff_smooth_window: int = None, save_json: bool = False, reader_kwargs: dict = None, writer_kwargs: dict = None, **kwargs) → list[Path]
Unified Suite2p processing pipeline for any input type.
Uses mbo_utilities.imread() to handle all supported input formats:
- Raw ScanImage TIFFs (with phase correction and ROI stitching)
- Processed TIFFs, Zarr, HDF5 files
- Existing Suite2p binaries (.bin + ops.npy)
- Directories containing supported files
- Pre-loaded lazy arrays from mbo_utilities
- Parameters:
- input_data : str, Path, list, or lazy array
Input data source. Can be:
- Path to a file (TIFF, Zarr, HDF5, .bin)
- Path to a directory containing supported files
- List of file paths (one per plane for volumetric data)
- An mbo_utilities lazy array (MboRawArray, Suite2pArray, etc.)
- save_path : str or Path, optional
Output directory for results. If None:
- For file inputs: uses parent directory of input
- For array inputs: raises ValueError (must be specified)
- ops : dict, optional
Suite2p parameters. If None, uses default_ops() with metadata auto-populated from the input data (frame rate, pixel size, etc.).
- planes : int or list, optional
Which z-planes to process (1-indexed). Options:
- None: Process all planes (default)
- int: Process single plane (e.g., planes=7)
- list: Process specific planes (e.g., planes=[1, 5, 10])
- roi : int, optional
ROI handling for multi-ROI ScanImage data:
- None: Stitch all ROIs horizontally into single FOV (default)
- 0: Process each ROI separately (creates separate outputs)
- N > 0: Process only ROI N (1-indexed)
- keep_reg : bool, default True
Keep registered binary (data.bin) after processing.
- keep_raw : bool, default False
Keep raw binary (data_raw.bin) after processing.
- force_reg : bool, default False
Force re-registration even if already complete.
- force_detect : bool, default False
Force ROI detection even if stat.npy exists.
- dff_window_size : int, optional
Window size for rolling percentile ΔF/F baseline (in frames). If None, auto-calculated as ~10 × tau × fs.
- dff_percentile : int, default 20
Percentile for baseline F₀ estimation.
- dff_smooth_window : int, optional
Temporal smoothing window for dF/F traces (in frames). If None, auto-calculated as ~0.5 × tau × fs to ensure the window spans the calcium indicator decay time. Set to 1 to disable.
- save_json : bool, default False
Save ops as JSON in addition to .npy.
- reader_kwargs : dict, optional
Keyword arguments passed to mbo_utilities.imread() when loading data. Useful for controlling how raw ScanImage TIFFs are read. Common options:
fix_phase : bool, default True. Apply phase correction for bidirectional scanning.
phasecorr_method : str, default 'mean'. Phase correction method ('mean', 'mode', 'median').
border : int, default 3. Border pixels to ignore during phase estimation.
use_fft : bool, default False. Use FFT-based subpixel phase correction.
fft_method : str, default '2d'. FFT method ('1d' or '2d').
upsample : int, default 5. Upsampling factor for subpixel precision.
max_offset : int, default 4. Maximum phase offset to search.
- writer_kwargs : dict, optional
Keyword arguments passed to mbo_utilities when writing binary files. Common options:
target_chunk_mb : int, default 100. Target chunk size in MB for streaming writes.
progress_callback : Callable, optional. Callback function for progress updates.
- **kwargs
Additional arguments passed to Suite2p.
- Returns:
- list[Path]
List of paths to ops.npy files for each processed plane.
See also
run_plane : Lower-level single-plane processing
run_volume : Process list of files (legacy API)
grid_search : Parameter optimization
Notes
Input Type Detection:
The function automatically detects input type and handles it appropriately:
Raw ScanImage TIFFs: Phase correction applied, multi-ROI stitched/split
Processed files: Loaded directly without modification
Suite2p binaries: Processed in-place if ops.npy exists
Directories: Scanned for supported files
Output Structure:
For volumetric data (multiple planes):
save_path/
├── plane01/
│   ├── ops.npy, stat.npy, F.npy, ...
│   └── data.bin (if keep_reg=True)
├── plane02/
│   └── ...
└── volume_stats.npy
For multi-ROI data with roi=0:
save_path/
├── plane01_roi1/
├── plane01_roi2/
└── merged_mrois/
    └── plane01/

Metadata Flow:
When ops=None, metadata from the input is used to populate:
- fs (frame rate)
- dx, dy (pixel resolution)
- Ly, Lx (frame dimensions)
Parameter Override Precedence:
The force_reg and force_detect arguments take precedence over do_registration and roidetect values in the ops dict:
- force_reg=True → always register, ignoring ops["do_registration"]
- force_detect=True → always detect, ignoring ops["roidetect"]
- force_reg=False (default) → skip registration if already complete, even if ops["do_registration"] = 1
This allows users to focus on detection parameters without worrying about the registration/detection flags in their ops dict.
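For example, to re-tune detection while reusing a completed registration (a minimal sketch with placeholder paths, in the style of the examples below):

>>> import lbm_suite2p_python as lsp
>>> results = lsp.pipeline(
...     "D:/data/raw_tiffs",
...     save_path="D:/results",
...     ops={"threshold_scaling": 0.8},
...     force_detect=True,   # re-run detection, ignoring ops["roidetect"]
... )                        # force_reg=False (default): existing registration is reused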
Examples
Process a directory of raw ScanImage TIFFs:
>>> import lbm_suite2p_python as lsp
>>> results = lsp.pipeline("D:/data/raw_tiffs", save_path="D:/results")
Process specific planes from a file:
>>> results = lsp.pipeline("D:/data/volume.zarr", planes=[1, 5, 10])
Process a pre-loaded array from mbo_utilities (e.g., from GUI):
>>> import mbo_utilities as mbo
>>> arr = mbo.imread("D:/data/raw")
>>> results = lsp.pipeline(arr, save_path="D:/results", roi=0)  # Split ROIs
Process with custom ops:
>>> ops = {"diameter": 8, "threshold_scaling": 0.8}
>>> results = lsp.pipeline("D:/data", ops=ops)
Control phase correction for raw ScanImage TIFFs:
>>> results = lsp.pipeline(
...     "D:/data/raw",
...     reader_kwargs={"fix_phase": True, "use_fft": True},
... )
Disable phase correction (for already-corrected data):
>>> results = lsp.pipeline(
...     "D:/data/raw",
...     reader_kwargs={"fix_phase": False},
... )
- lbm_suite2p_python.run_plane(input_path: str | Path, save_path: str | Path | None = None, ops: dict | str | Path = None, chan2_file: str | Path | None = None, keep_raw: bool = False, keep_reg: bool = True, force_reg: bool = False, force_detect: bool = False, dff_window_size: int = None, dff_percentile: int = 20, dff_smooth_window: int = None, save_json: bool = False, plane_name: str | None = None, reader_kwargs: dict = None, writer_kwargs: dict = None, **kwargs) → Path
Processes a single imaging plane using suite2p, handling registration, segmentation, and plotting of results.
- Parameters:
- input_path : str or Path
Full path to the file to process, with the file extension.
- save_path : str or Path, optional
Root directory to save the results. A subdirectory will be created based on the input filename or plane_name parameter.
- ops : dict, str or Path, optional
Path to or dict of user-supplied ops.npy. If given, it overrides any existing or generated ops.
- chan2_file : str, optional
Path to structural / anatomical data used for registration.
- keep_raw : bool, default False
If True, do not delete the raw binary (data_raw.bin) after processing.
- keep_reg : bool, default True
If True, keep the registered binary (data.bin) after processing.
- force_reg : bool, default False
If True, force a new registration even if existing shifts are found in ops.npy.
- force_detect : bool, default False
If True, force ROI detection even if an existing stat.npy is present.
- dff_window_size : int, optional
Number of frames for rolling percentile baseline in ΔF/F₀ calculation. If None (default), auto-calculated as ~10 × tau × fs based on ops values. This ensures the window spans multiple calcium transients so the percentile filter can find the baseline between events.
- dff_percentile : int, default 20
Percentile to use for baseline F₀ estimation in dF/F calculation.
- dff_smooth_window : int, optional
Temporal smoothing window for dF/F traces (in frames). If None (default), auto-calculated as ~0.5 × tau × fs to emphasize transients while reducing noise. Set to 1 to disable smoothing.
- save_json : bool, default False
If True, saves ops as a JSON file in addition to npy.
- plane_name : str, optional
Custom name for the plane subdirectory. If None, derived from input filename. Used by run_volume() to control output directory naming.
- **kwargs : dict, optional
- Returns:
- Path
Path to the saved ops.npy file.
- Raises:
- FileNotFoundError
If input_path does not exist.
- TypeError
If save_path is not a string or Path.
- Exception
If plotting functions fail.
Notes
ops supplied via the ops argument take precedence over any previously saved ops.npy files.
Results are saved to save_path/{plane_name}/ subdirectory to keep outputs organized.
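A minimal usage sketch based on the signature above (paths and parameter values are placeholders):

>>> import lbm_suite2p_python as lsp
>>> ops_path = lsp.run_plane(
...     "D:/data/plane07.tif",
...     save_path="D:/results",
...     ops={"threshold_scaling": 1.0},
... )
>>> # outputs land in D:/results/plane07/ (subdirectory named after the input file)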
- lbm_suite2p_python.run_volume(input_files: list, save_path: str | Path = None, ops: dict | str | Path = None, keep_reg: bool = True, keep_raw: bool = False, force_reg: bool = False, force_detect: bool = False, dff_window_size: int = None, dff_percentile: int = 20, dff_smooth_window: int = None, save_json: bool = False, reader_kwargs: dict = None, writer_kwargs: dict = None, **kwargs)
Processes a full volumetric imaging dataset using Suite2p, handling plane-wise registration, segmentation, plotting, and aggregation of volumetric statistics and visualizations.
Supports planar TIFF, contiguous .zarr, and Suite2p .bin inputs, and automatically merges multi-ROI datasets acquired with ScanImage’s multi-ROI mode.
- Parameters:
- input_files : list of str or Path
List of file paths, each representing a single imaging plane. Supported formats:
- .tif files (e.g., “plane01.tif”, “plane02.tif”)
- .bin files from mbo.imwrite (e.g., “plane01_stitched/data_raw.bin”)
- .zarr files (e.g., “plane01_roi01.zarr”, “plane01_roi02.zarr”)
Binary inputs must have an accompanying ops.npy in the parent directory.
- save_path : str or Path, optional
Base directory to save all outputs. If None, creates a “volume” directory in the parent of the first input file. For binary inputs with ops.npy, processing occurs in-place at the parent directory.
- ops : dict or str or Path, optional
Suite2p parameters to use for each imaging plane. Can be:
- Dictionary of parameters
- Path to ops.npy file
- None (uses defaults from default_ops())
- keep_raw : bool, default False
If True, do not delete the raw binary (data_raw.bin) after processing.
- keep_reg : bool, default True
If True, keep the registered binary (data.bin) after processing.
- force_reg : bool, default False
If True, force re-registration even if refImg/meanImg/xoff exist in ops.npy.
- force_detect : bool, default False
If True, force ROI detection even if stat.npy exists and is non-empty.
- dff_window_size : int, optional
Window size for rolling percentile ΔF/F baseline (in frames). If None, auto-calculated as ~10 × tau × fs.
- dff_percentile : int, default 20
Percentile to use for baseline F₀ estimation (e.g., 20 = 20th percentile).
- dff_smooth_window : int, optional
Temporal smoothing window for dF/F traces (in frames). If None, auto-calculated as ~0.5 × tau × fs to ensure the window spans the calcium indicator decay time. Set to 1 to disable.
- save_json : bool, default False
If True, saves ops as JSON in addition to .npy format.
If True, saves ops as JSON in addition to .npy format.
- **kwargs
Additional keyword arguments passed to run_plane().
- Returns:
- list of Path
List of paths to ops.npy files for each plane (or merged plane if mROI).
See also
run_plane : Process a single imaging plane
run_plane_bin : Process an existing binary file through Suite2p pipeline
merge_mrois : Manual multi-ROI merging function
Notes
Directory Structure:
For standard single-ROI data:
save_path/
├── plane01/
│   ├── ops.npy, stat.npy, F.npy, Fneu.npy, spks.npy, iscell.npy
│   ├── data.bin (registered binary, if keep_reg=True)
│   └── [visualization PNGs]
├── plane02/
│   └── ...
├── volume_stats.npy          # Per-plane statistics
├── mean_volume_signal.png    # Signal strength across planes
└── rastermap.png             # Clustered activity (if rastermap installed)
Multi-ROI Merging:
When input filenames contain “roi” (case-insensitive), e.g., “plane01_roi01.tif”, “plane01_roi02.tif”, the pipeline automatically detects multi-ROI acquisition and performs horizontal stitching after planar processing:
save_path/
├── plane01_roi01/            # Individual ROI results
│   └── [Suite2p outputs]
├── plane01_roi02/
│   └── [Suite2p outputs]
├── merged_mrois/             # Merged results (used for volumetric stats)
│   ├── plane01/
│   │   ├── ops.npy           # Merged ops with Lx = sum of ROI widths
│   │   ├── stat.npy          # Concatenated ROIs with xpix offsets applied
│   │   ├── F.npy, spks.npy   # Concatenated traces
│   │   ├── data.bin          # Horizontally stitched binary
│   │   └── [merged visualizations]
│   └── plane02/
│       └── ...
└── [volumetric outputs as above]
The merging process:
- Groups directories by plane number (e.g., “plane01_roi01”, “plane01_roi02” → “plane01”)
- Horizontally concatenates images (refImg, meanImg, meanImgE, max_proj)
- Adjusts stat[“xpix”] and stat[“med”] coordinates to account for horizontal offset (see the sketch below)
- Concatenates fluorescence traces (F, Fneu, spks) and cell classifications (iscell)
- Creates stitched binary files by horizontally stacking frames
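The x-coordinate adjustment can be pictured with a short sketch (a hypothetical helper, not the library’s internal function; it assumes Suite2p’s convention that stat["med"] is stored as [y, x] and that each ROI’s width is its ops["Lx"]):

import numpy as np

def shift_roi_stats(stats_per_roi, roi_widths):
    # Concatenate per-ROI stat arrays, shifting x coordinates by the
    # cumulative width of the ROIs stitched to their left.
    merged, x_offset = [], 0
    for stats, width in zip(stats_per_roi, roi_widths):
        for s in stats:
            s = dict(s)
            s["xpix"] = np.asarray(s["xpix"]) + x_offset
            s["med"] = [s["med"][0], s["med"][1] + x_offset]  # med is [y, x]
            merged.append(s)
        x_offset += width
    return merged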
Supported Input Scenarios:
TIFF files (standard workflow):
input_files = ["plane01.tif", "plane02.tif", "plane03.tif"] lsp.run_volume(input_files, save_path="outputs")
Binary files from interrupted processing:
input_files = [ "plane01_stitched/data_raw.bin", "plane02_stitched/data_raw.bin", ] lsp.run_volume(input_files) # Processes in-place
Multi-ROI TIFF files (automatic merging):
input_files = [ "plane01_roi01.tif", "plane01_roi02.tif", "plane02_roi01.tif", "plane02_roi02.tif", ] lsp.run_volume(input_files, save_path="outputs")
Mixed input types:
input_files = [ "plane01.tif", # New TIFF "plane02_stitched/data_raw.bin", # Existing binary ] lsp.run_volume(input_files, save_path="outputs")
- lbm_suite2p_python.grid_search(input_file: Path | str, save_path: Path | str, grid_params: dict, ops: dict = None, force_reg: bool = False, force_detect: bool = True)
Run a grid search over all combinations of Suite2p parameters.
Tests all combinations of parameters in grid_params, running run_plane for each combination and saving results to separate subdirectories.
- Parameters:
- input_file : str or Path
Path to the input data file (TIFF, Zarr, HDF5, etc.).
- save_path : str or Path
Root directory where results will be saved. Each parameter combination gets its own subdirectory named by parameter values.
- grid_params : dict
Dictionary mapping parameter names to lists of values to test. All combinations will be tested (Cartesian product).
- ops : dict, optional
Base ops dictionary. If None, uses default_ops(). Grid parameters override values in this dictionary for each combination.
- force_reg : bool, default False
If True, force registration even if already done.
- force_detect : bool, default True
If True, force ROI detection for each combination.
Notes
force_reg and force_detect override any do_registration or roidetect values in the ops dict; users do not need to set those.
Subfolder names use abbreviated parameter keys (first 3 chars) and values.
Registration is shared across combinations when force_reg=False.
For Suite2p parameters, see: https://suite2p.readthedocs.io/en/latest/settings.html
Examples
>>> import lbm_suite2p_python as lsp
>>>
>>> # Search detection parameters
>>> lsp.grid_search(
...     input_file="data/plane07.zarr",
...     save_path="results/grid_search",
...     grid_params={
...         "threshold_scaling": [0.8, 1.0, 1.2],
...         "diameter": [6, 8],
...     },
... )
>>>
>>> # Search Cellpose parameters
>>> lsp.grid_search(
...     input_file="data/plane07.zarr",
...     save_path="results/cellpose_search",
...     grid_params={
...         "anatomical_only": [2, 3],
...         "spatial_hp_cp": [0, 0.5],
...         "diameter": [6, 8],
...     },
...     ops={"sparse_mode": False},  # Required for Cellpose
... )
Output structure:
results/grid_search/
├── thr0.80_dia6/
│   ├── ops.npy, stat.npy, F.npy, ...
├── thr0.80_dia8/
├── thr1.00_dia6/
└── ...
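One way to compare combinations afterwards is to walk the subfolders and count accepted ROIs per parameter set (a sketch relying only on the standard Suite2p outputs shown above):

from pathlib import Path
import numpy as np

roi_counts = {}
for ops_file in Path("results/grid_search").glob("*/ops.npy"):
    iscell = np.load(ops_file.parent / "iscell.npy")  # column 0: accepted flag
    roi_counts[ops_file.parent.name] = int(iscell[:, 0].sum())
print(roi_counts)  # mapping of subfolder name -> accepted ROI count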
Load results
- lbm_suite2p_python.load_ops(ops_input: str | Path | list[str | Path]) → dict
Simple utility to load a Suite2p ops.npy file.
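For example (path is a placeholder):

>>> import lbm_suite2p_python as lsp
>>> ops = lsp.load_ops("D:/results/plane01/ops.npy")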
- lbm_suite2p_python.load_planar_results(ops: dict | str | Path, z_plane: list | int = None) → dict
Load stat, iscell, spks files and return as a dict. Does NOT filter by valid cells; arrays contain both accepted and rejected neurons. Filter for accepted-only via iscell_mask = iscell[:, 0].astype(bool).
- Parameters:
- ops : dict, str or Path
Dict of or path to the ops.npy file. Can be a fully qualified path or a directory containing ops.npy.
- z_plane : int or None, optional
The z-plane index for this file. If provided, it is stored in the output.
- Returns:
- dict
Dictionary with keys: ‘F’ (fluorescence traces, n_rois x n_frames), ‘Fneu’ (neuropil fluorescence), ‘spks’ (deconvolved spikes), ‘stat’ (ROI statistics array), ‘iscell’ (classification array where column 0 is 0/1 rejected/accepted and column 1 is probability), and ‘z_plane’ (z-plane index array).
See also
lbm_suite2p_python.load_ops, lbm_suite2p_python.load_traces
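A short usage sketch, following the filtering hint above (path is a placeholder):

>>> res = lsp.load_planar_results("D:/results/plane01/ops.npy")
>>> iscell_mask = res["iscell"][:, 0].astype(bool)
>>> F_accepted = res["F"][iscell_mask]        # traces of accepted cells only
>>> spks_accepted = res["spks"][iscell_mask]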
Plot results
- lbm_suite2p_python.plot_projection(ops, output_directory=None, fig_label=None, vmin=None, vmax=None, add_scalebar=False, proj='meanImg', display_masks=False, accepted_only=False)
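A usage sketch based on the signature above (path is a placeholder):

>>> ops = lsp.load_ops("D:/results/plane01/ops.npy")
>>> lsp.plot_projection(
...     ops,
...     output_directory="D:/results/plane01",
...     proj="meanImg",
...     display_masks=True,
...     add_scalebar=True,
... )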
- lbm_suite2p_python.plot_rastermap(spks, model, neuron_bin_size=None, fps=17, vmin=0, vmax=0.8, xmin=0, xmax=None, save_path=None, title=None, title_kwargs=None, fig_text=None)
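A usage sketch, assuming the optional rastermap package is installed and provides the fitted model (check rastermap’s own documentation for its constructor arguments; paths are placeholders):

>>> from rastermap import Rastermap
>>> res = lsp.load_planar_results("D:/results/plane01/ops.npy")
>>> model = Rastermap().fit(res["spks"])
>>> lsp.plot_rastermap(
...     res["spks"], model, fps=17,
...     save_path="D:/results/plane01/rastermap.png",
... )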
- lbm_suite2p_python.plot_volume_signal(zstats, savepath)
Plots the mean fluorescence signal per z-plane with standard deviation error bars.
This function loads signal statistics from a .npy file and visualizes the mean fluorescence signal per z-plane, with error bars representing the standard deviation.
- Parameters:
- zstats : str or Path
Path to the .npy file containing the volume stats. The output of get_zstats().
- savepath : str or Path
Path to save the generated figure.
Notes
The .npy file should contain structured data with plane, mean_trace, and std_trace fields.
Error bars represent the standard deviation of the fluorescence signal.
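For example, with the volume_stats.npy written by run_volume (paths are placeholders):

>>> lsp.plot_volume_signal(
...     "D:/results/volume_stats.npy",
...     "D:/results/mean_volume_signal.png",
... )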
- lbm_suite2p_python.plot_traces(f, save_path: str | Path = '', cell_indices: ndarray | list[int] | None = None, fps=17.0, num_neurons=20, window=220, title='', offset=None, lw=0.5, cmap='tab10', scale_bar_unit: str = None, mask_overlap: bool = True) → None
Plot stacked fluorescence traces with automatic offset and scale bars.
- Parameters:
- f : ndarray
2d array of fluorescence traces (n_neurons x n_timepoints).
- save_path : str, optional
Path to save the output plot.
- fps : float
Sampling rate in frames per second.
- num_neurons : int
Number of neurons to display if cell_indices is None.
- window : float
Time window (in seconds) to display.
- title : str
Title of the figure.
- offset : float or None
Vertical offset between traces; if None, computed automatically.
- lw : float
Line width for data points.
- cmap : str
Matplotlib colormap string.
- scale_bar_unit : str, optional
Unit suffix for the vertical scale bar (e.g., “% ΔF/F₀”, “a.u.”). The numeric value is computed automatically based on the plot’s vertical scale. If None, inferred from data range.
- cell_indices : array-like or None
Specific cell indices to plot. If provided, overrides num_neurons.
- mask_overlap : bool, default True
If True, lower traces mask (occlude) traces above them, creating a layered effect where each trace has a black background.
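A usage sketch with traces loaded via load_planar_results (paths are placeholders):

>>> res = lsp.load_planar_results("D:/results/plane01/ops.npy")
>>> lsp.plot_traces(
...     res["F"],
...     save_path="D:/results/plane01/traces.png",
...     fps=17.0,
...     num_neurons=20,
...     title="Plane 01",
... )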
- lbm_suite2p_python.plot_execution_time(filepath, savepath)
Plots the execution time for each processing step per z-plane.
This function loads execution timing data from a .npy file and visualizes the runtime of different processing steps as a stacked bar plot with a black background.
- Parameters:
- filepath : str or Path
Path to the .npy file containing the volume timing stats.
- savepath : str or Path
Path to save the generated figure.
Notes
The .npy file should contain structured data with plane, registration, detection, extraction, classification, deconvolution, and total_plane_runtime fields.
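A usage sketch, assuming a timing-stats .npy produced alongside the volumetric outputs (the filename here is a placeholder):

>>> lsp.plot_execution_time(
...     "D:/results/volume_timing.npy",
...     "D:/results/execution_time.png",
... )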
Post-Processing
- lbm_suite2p_python.dff_rolling_percentile(f_trace, window_size: int = None, percentile: int = 20, use_median_floor: bool = False, smooth_window: int = None, fs: float = None, tau: float = None)
Compute ΔF/F₀ using a rolling percentile baseline.
- Parameters:
- f_trace : np.ndarray
(N_neurons, N_frames) fluorescence traces.
- window_size : int, optional
Size of the rolling window for baseline estimation (in frames). If None, auto-calculated as ~10 × tau × fs (default: 300 frames).
- percentile : int, default 20
Percentile to use for baseline F₀ estimation.
- use_median_floor : bool, default False
Set a minimum F₀ floor at 1% of the median fluorescence.
- smooth_window : int, optional
Size of temporal smoothing window (in frames) applied after dF/F. If None, auto-calculated as ~0.5 × tau × fs to emphasize transients while reducing noise. Set to 0 or 1 to disable smoothing.
- fs : float, optional
Frame rate in Hz. Used to auto-calculate window sizes if tau is provided.
- tau : float, optional
Calcium indicator decay time constant in seconds (e.g., 1.0 for GCaMP6s). Used to auto-calculate window sizes if fs is provided.
- Returns:
- dff : np.ndarray
(N_neurons, N_frames) ΔF/F₀ traces.
Notes
Window size recommendations:
- Baseline window (~10 × tau × fs): should span multiple transients so the percentile filter can find the baseline between events.
- Smooth window (~0.5 × tau × fs): should be shorter than typical transients to preserve them while averaging out noise.

For GCaMP6s (tau ≈ 1.0 s) at 30 Hz:
- window_size ≈ 300 frames (10 seconds)
- smooth_window ≈ 15 frames (0.5 seconds)

For GCaMP6f (tau ≈ 0.4 s) at 30 Hz:
- window_size ≈ 120 frames (4 seconds)
- smooth_window ≈ 6 frames (0.2 seconds)
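For example, for GCaMP6s at 30 Hz the windows can be derived automatically from fs and tau, or given explicitly (F is an (n_neurons, n_frames) array such as the 'F' entry returned by load_planar_results):

>>> dff = lsp.dff_rolling_percentile(F, fs=30.0, tau=1.0, percentile=20)
>>> dff = lsp.dff_rolling_percentile(F, window_size=300, smooth_window=15)  # equivalent explicit windows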
- lbm_suite2p_python.dff_median_filter(f_trace)
Compute ΔF/F₀ using a rolling median filter baseline.
- lbm_suite2p_python.dff_shot_noise(dff, fr)
Estimate the shot noise level of calcium imaging traces.
This metric quantifies the noise level based on frame-to-frame differences, assuming slow calcium dynamics compared to the imaging frame rate. It was introduced by Rupprecht et al. (2021) [1] as a standardized method for comparing noise levels across datasets with different acquisition parameters.
The noise level \(\nu\) is computed as:
\[\nu = \frac{\mathrm{median}_t\left( \left| \Delta F/F_{t+1} - \Delta F/F_t \right| \right)}{\sqrt{f_r}}\]
where
\(\Delta F/F_t\) is the fluorescence trace at time \(t\)
\(f_r\) is the imaging frame rate (in Hz).
- Parameters:
- dff : np.ndarray
Array of shape (n_neurons, n_frames), containing raw \(\Delta F/F\) traces (percent units, without neuropil subtraction).
- fr : float
Frame rate of the recording in Hz.
- Returns:
- np.ndarray
Noise level \(\nu\) for each neuron, expressed in %/√Hz units.
Notes
The metric relies on the slow dynamics of calcium signals compared to frame rate.
Higher values of \(\nu\) indicate higher shot noise.
Units are % divided by √Hz, and while unconventional, they enable comparison across frame rates.
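The formula maps directly onto a few lines of NumPy; the following is only an illustrative re-implementation of the same metric, not the packaged function (use lsp.dff_shot_noise(dff, fr) in practice):

import numpy as np

def shot_noise_level(dff, fr):
    # Median absolute frame-to-frame difference of dF/F (in percent),
    # scaled by the square root of the frame rate (Hz).
    return np.median(np.abs(np.diff(dff, axis=1)), axis=1) / np.sqrt(fr)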
References
- [1] Rupprecht et al., “Large-scale calcium imaging & noise levels”, A Neuroscientific Blog (2021). https://gcamp6f.com/2021/10/04/large-scale-calcium-imaging-noise-levels/