Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Source

Select target project
No results found

Target

Select target project
  • swain-lab/aliby/aliby-mirror
  • swain-lab/aliby/alibylite
2 results
Show changes
Commits on Source (63)
Showing with 1989 additions and 1263 deletions
## Summary
-(Summarize the bug encountered concisely)
+{Summarize the bug encountered concisely}
I confirm that I have (if relevant):
- [ ] Read the troubleshooting guide: https://gitlab.com/aliby/aliby/-/wikis/Troubleshooting-(basic)
- [ ] Updated aliby and aliby-baby.
- [ ] Tried the unit test.
- [ ] Tried a scaled-down version of my experiment (distributed=0, filter=0, tps=10)
- [ ] Tried re-postprocessing.
## Steps to reproduce
-(How one can reproduce the issue - this is very important)
+{How one can reproduce the issue - this is very important}
- aliby version: 0.1.{...}, or if development/unreleased version, commit SHA: {...}
- platform(s):
- [ ] Jura
- [ ] Other Linux, please specify distribution and version: {...}
- [ ] MacOS, please specify version: {...}
- [ ] Windows, please specify version: {...}
- experiment ID: {...}
- Any special things you need to know about this experiment: {...}
## What is the current bug behavior?
...@@ -19,6 +35,12 @@
(Paste any relevant logs - please use code blocks (```) to format console output, logs, and code, as
it's very hard to read otherwise.)
```
{PASTE YOUR ERROR MESSAGE HERE!!}
```
## Possible fixes
(If you can, link to the line of code that might be responsible for the problem)
...@@ -11,11 +11,12 @@ End-to-end processing of cell microscopy time-lapses. ALIBY automates segmentati
## Quickstart Documentation
Installation of [VS Studio](https://visualstudio.microsoft.com/downloads/#microsoft-visual-c-redistributable-for-visual-studio-2022) is required for Windows. Native MacOS support is in progress, but you can use containers (e.g., Docker, Podman) in the meantime.
-For analysing local data
+To analyse local data
```bash
-pip install aliby # aliby[network] if you want to access an OMERO server
+pip install aliby
```
Add any of the optional flags `omero` and `utils` (e.g., `pip install aliby[omero, utils]`). `omero` provides tools to connect with an OMERO server and `utils` provides visualisation, user interface and additional deep learning tools.
See our [installation instructions]( https://aliby.readthedocs.io/en/latest/INSTALL.html ) for more details.
### CLI
...@@ -80,12 +81,18 @@ It fetches the metadata from the Image object, and uses the TilerParameters valu
```python
fpath = "h5/location"
-trap_id = 9
+tile_id = 9
-trange = list(range(0, 30))
+trange = range(0, 10)
ncols = 8
riv = remoteImageViewer(fpath)
-trap_tps = riv.get_trap_timepoints(trap_id, trange, ncols)
+trap_tps = [riv.tiler.get_tiles_timepoint(tile_id, t) for t in trange]
# You can also access labelled traps
m_ts = riv.get_labelled_trap(tile_id=0, tps=[0])
# And plot them directly
riv.plot_labelled_trap(trap_id=0, channels=[0, 1, 2, 3], trange=range(10))
```
Depending on the network speed, this can take several seconds at the moment.
...@@ -95,8 +102,8 @@ For a speed-up: take fewer z-positions if you can.
Alternatively, if you want to get all the traps at a given timepoint:
```python
-timepoint = 0
+timepoint = (4,6)
-seg_expt.get_tiles_timepoints(timepoint, tile_size=96, channels=None,
+tiler.get_tiles_timepoint(timepoint, channels=None,
z=[0,1,2,3,4])
```
......
...@@ -125,3 +125,45 @@ docker-compose stop
Segmentation has been tested on: Mac OSX Mojave, Ubuntu 20.04 and Arch Linux.
Data processing has been tested on all the above and Windows 11.
### Detailed Windows installation
#### Create environment
Open anaconda powershell as administrator
```shell script
conda create -n devaliby2 -c conda-forge python=3.8 omero-py
conda activate devaliby2
```
#### Install poetry
You may have to specify the python executable to get this to work:
```shell script
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | C:\Users\USERNAME\Anaconda3\envs\devaliby2\python.exe -
```
Also specify the full path when running poetry (there must be a way to sort this).
- Clone the repository (assuming you have SSH set up properly):
```shell script
git clone git@gitlab.com:aliby/aliby.git
cd aliby
poetry install --all-extras
```
You may need to run poetry via its full path twice: the first run may give an error message, but the second should work.
```shell script
C:\Users\v1iclar2\AppData\Roaming\Python\Scripts\poetry install --all-extras
```
Confirm the installation of aliby: run `python`, then `import aliby`; you should get no error message.
#### Access the virtual environment from the IDE (e.g., PyCharm)
- Create a new project
- In "Location", navigate to the aliby folder (e.g., C:/Users/Public/Repos/aliby)
- Select the correct Python interpreter:
  - Click the interpreter name at the bottom right
  - Click "Add local interpreter"
  - On the left, click "Conda environment"
  - Click the three dots to the right of the interpreter path and navigate to the Python executable from the environment created above (e.g., C:\Users\v1iclar2\Anaconda3\envs\devaliby2\python.exe)
#### Potential Windows issues
- Sometimes the library pywin32 gives trouble; just install it using pip or conda
...@@ -4,7 +4,6 @@
contain the root `toctree` directive.
.. toctree::
:hidden:
Home page <self>
Installation <INSTALL.md>
......
#+title: Input/Output Stage Dependencies
Overview of what fields are required for each consecutive step to run, and what each step produces.
- Registration
- Tiler
- Requires:
- None
# - Optionally:
- Produces:
- /trap_info
- Tiler
- Requires:
- None
- Produces:
- /trap_info
2022-10-10 15:31:27,350 - INFO
Swain Lab microscope experiment log file
GIT commit: e5d5e33 fix: changes to a few issues with focus control on Batman.
Microscope name: Batman
Date: 2022-10-10 15:31:27
Log file path: D:\AcquisitionDataBatman\Swain Lab\Ivan\RAW DATA\2022\Oct\10-Oct-2022\pH_med_to_low00\pH_med_to_low.log
Micromanager config file: C:\Users\Public\Microscope control\Micromanager config files\Batman_python_15_4_22.cfg
Omero project: Default project
Omero tags:
Experiment details: Effect on growth and cytoplasmic pH of switch from normal pH (4.25) media to higher pH (5.69). Switching is run using the Oxygen software
-----Acquisition settings-----
2022-10-10 15:31:27,350 - INFO Image Configs:
Image config,Channel,Description,Exposure (ms), Number of Z sections,Z spacing (um),Sectioning method
brightfield1,Brightfield,Default bright field config,30,5,0.6,PIFOC
pHluorin405_0_4,pHluorin405,Phluorin excitation from 405 LED 0.4v and 10ms exposure,5,1,0.6,PIFOC
pHluorin488_0_4,GFPFast,Phluorin excitation from 488 LED 0.4v,10,1,0.6,PIFOC
cy5,cy5,Default cy5,30,1,0.6,PIFOC
Device properties:
Image config,device,property,value
pHluorin405_0_4,DTOL-DAC-1,Volts,0.4
pHluorin488_0_4,DTOL-DAC-2,Volts,0.4
cy5,DTOL-DAC-3,Volts,4
2022-10-10 15:31:27,353 - INFO
group: YST_247 field: position
Name, X, Y, Z, Autofocus offset
YST_247_001,-8968,-3319,2731.125040696934,123.25
YST_247_002,-8953,-3091,2731.3000406995416,123.25
YST_247_003,-8954,-2849,2731.600040704012,122.8
YST_247_004,-8941,-2611,2730.7750406917185,122.8
YST_247_005,-8697,-2541,2731.4500407017767,118.6
group: YST_247 field: time
start: 0
interval: 300
frames: 180
group: YST_247 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,356 - INFO
group: YST_1510 field: position
Name,X,Y,Z,Autofocus offset
YST_1510_001,-6450,-230,2343.300034917891,112.55
YST_1510_002,-6450,-436,2343.350034918636,112.55
YST_1510_003,-6450,-639,2344.000034928322,116.8
YST_1510_004,-6450,-831,2344.250034932047,116.8
YST_1510_005,-6848,-536,2343.3250349182636,110
group: YST_1510 field: time
start: 0
interval: 300
frames: 180
group: YST_1510 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,359 - INFO
group: YST_1511 field: position
Name, X, Y, Z, Autofocus offset
YST_1511_001,-10618,-1675,2716.900040484965,118.7
YST_1511_002,-10618,-1914,2717.2250404898077,122.45
YST_1511_003,-10367,-1695,2718.2500405050814,120.95
YST_1511_004,-10367,-1937,2718.8250405136496,120.95
YST_1511_005,-10092,-1757,2719.975040530786,119.45
group: YST_1511 field: time
start: 0
interval: 300
frames: 180
group: YST_1511 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,362 - INFO
group: YST_1512 field: position
Name,X,Y,Z,Autofocus offset
YST_1512_001,-8173,-2510,2339.0750348549336,115.65
YST_1512_002,-8173,-2718,2338.0250348392874,110.8
YST_1512_003,-8173,-2963,2336.625034818426,110.8
YST_1512_004,-8457,-2963,2336.350034814328,110.9
YST_1512_005,-8481,-2706,2337.575034832582,113.3
group: YST_1512 field: time
start: 0
interval: 300
frames: 180
group: YST_1512 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,365 - INFO
group: YST_1513 field: position
Name,X,Y,Z,Autofocus offset
YST_1513_001,-6978,-2596,2339.8750348668545,113.3
YST_1513_002,-6978,-2380,2340.500034876168,113.3
YST_1513_003,-6971,-2163,2340.8750348817557,113.3
YST_1513_004,-6971,-1892,2341.2500348873436,113.3
YST_1513_005,-6692,-1892,2341.550034891814,113.3
group: YST_1513 field: time
start: 0
interval: 300
frames: 180
group: YST_1513 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,365 - INFO
2022-10-10 15:31:27,365 - INFO
-----Experiment started-----
Source diff could not be displayed: it is too large.
[tool.poetry]
name = "aliby"
-version = "0.1.55"
+version = "0.1.64"
description = "Process and analyse live-cell imaging data"
authors = ["Alan Munoz <alan.munoz@ed.ac.uk>"]
packages = [
...@@ -14,7 +14,7 @@ readme = "README.md"
[tool.poetry.scripts]
aliby-run = "aliby.bin.run:run"
-aliby-annotate = "aliby.bin.annotate:annotate_image"
+aliby-annotate = "aliby.bin.annotate:annotate"
aliby-visualise = "aliby.bin.visualise:napari_overlay"
[build-system]
...@@ -45,13 +45,14 @@ tqdm = "^4.62.3" # progress bars
xmltodict = "^0.13.0" # read ome-tiff metadata
zarr = "^2.14.0"
GitPython = "^3.1.27"
h5py = "2.10" # File I/O
# Networking
omero-py = { version = ">=5.6.2", optional = true } # contact omero server
-[tool.poetry.extras]
-omero = [ "omero-py" ]
+# Baby segmentation
+aliby-baby = {version = "^0.1.17", optional=true}
# Postprocessing
[tool.poetry.group.pp.dependencies]
...@@ -59,10 +60,9 @@ leidenalg = "^0.8.8"
more-itertools = "^8.12.0"
pycatch22 = "^0.4.2"
[tool.poetry.group.pp]
optional = true
[tool.poetry.group.baby.dependencies]
aliby-baby = "^0.1.15"
h5py = "2.10" # File I/O
[tool.poetry.group.dev]
optional = true
...@@ -102,12 +102,18 @@ pytest = "^6.2.5"
[tool.poetry.group.utils]
optional = true
# Dependency groups can only be used by a poetry installation, not pip
[tool.poetry.group.utils.dependencies]
-napari = ">=0.4.16"
+napari = {version = ">=0.4.16", optional=true}
-torch = "^1.13.1"
+Torch = {version = "^1.13.1", optional=true}
-pytorch-lightning = "^1.9.3"
+pytorch-lightning = {version = "^1.9.3", optional=true}
-torchvision = "^0.14.1"
+torchvision = {version = "^0.14.1", optional=true}
-trio = "^0.22.0"
+trio = {version = "^0.22.0", optional=true}
grid-strategy = {version = "^0.0.1", optional=true}
[tool.poetry.extras]
omero = ["omero-py"]
baby = ["aliby-baby"]
[tool.black]
line-length = 79
......
...@@ -3,7 +3,7 @@ import typing as t
from abc import ABC, abstractmethod
from collections.abc import Iterable
from copy import copy
-from pathlib import Path, PosixPath
+from pathlib import Path
from time import perf_counter
from typing import Union
...@@ -60,14 +60,14 @@ class ParametersABC(ABC):
else:
return iterable
-def to_yaml(self, path: Union[PosixPath, str] = None):
+def to_yaml(self, path: Union[Path, str] = None):
"""
Returns a yaml stream of the attributes of the class instance.
If path is provided, the yaml stream is saved there.
Parameters
----------
-path : Union[PosixPath, str]
+path : Union[Path, str]
Output path.
"""
if path:
...@@ -80,7 +80,7 @@ class ParametersABC(ABC):
return cls(**d)
@classmethod
-def from_yaml(cls, source: Union[PosixPath, str]):
+def from_yaml(cls, source: Union[Path, str]):
"""
Returns instance from a yaml filename or stdin
"""
...@@ -202,7 +202,7 @@ class ProcessABC(ABC):
def run(self):
pass
-def _log(self, message: str, level: str = "warn"):
+def _log(self, message: str, level: str = "warning"):
# Log messages in the corresponding level
logger = logging.getLogger("aliby")
getattr(logger, level)(f"{self.__class__.__name__}: {message}")
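The default level changes from "warn" to "warning" because the message is dispatched with `getattr(logger, level)`: `Logger.warning` is the canonical method, while `Logger.warn` is only a deprecated alias. A minimal sketch of that dispatch pattern (illustrative only, not ALIBY's class):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("aliby")

def log(message: str, level: str = "warning") -> None:
    # Dispatch on the level name, as _log does above; "warning" and "info"
    # are real Logger methods, whereas "warn" is a deprecated alias.
    getattr(logger, level)(f"Example: {message}")

log("tile out of bounds")       # routed to logger.warning(...)
log("processing done", "info")  # any valid level name works
```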
...@@ -211,7 +211,7 @@ class ProcessABC(ABC):
def check_type_recursive(val1, val2):
same_types = True
if not isinstance(val1, type(val2)) and not all(
-type(x) in (PosixPath, str) for x in (val1, val2) # Ignore str->path
+type(x) in (Path, str) for x in (val1, val2) # Ignore str->path
):
return False
if not isinstance(val1, t.Iterable) and not isinstance(val2, t.Iterable):
...@@ -249,5 +249,5 @@ class StepABC(ProcessABC):
return self._run_tp(tp, **kwargs)
def run(self):
-# Replace run withn run_tp
+# Replace run with run_tp
raise Warning("Steps use run_tp instead of run")
...@@ -162,5 +162,8 @@ def image_creds_from_h5(fpath: str):
attrs = attrs_from_h5(fpath)
return (
attrs["image_id"],
-yaml.safe_load(attrs["parameters"])["general"]["server_info"],
+{
k: yaml.safe_load(attrs["parameters"])["general"][k]
for k in ("username", "password", "host")
},
)
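The return value now exposes the individual connection fields instead of a single `server_info` blob. A rough sketch of the new behaviour, assuming (as the diff suggests) that the HDF5 file carries a YAML-encoded `parameters` attribute whose `general` section holds `username`, `password` and `host`; the helper name and file layout are otherwise hypothetical:

```python
import h5py
import yaml

def image_creds_from_h5_sketch(fpath: str):
    """Hedged re-implementation sketch; not the library's exact code."""
    with h5py.File(fpath, "r") as f:
        attrs = dict(f.attrs)  # assumes 'image_id' and 'parameters' attributes exist
    general = yaml.safe_load(attrs["parameters"])["general"]
    return (
        attrs["image_id"],
        {k: general[k] for k in ("username", "password", "host")},
    )
```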
import logging
import typing as t
from itertools import groupby
-from pathlib import Path, PosixPath
+from pathlib import Path
from functools import lru_cache, cached_property
import h5py
...@@ -26,13 +26,13 @@ class Cells:
"""
def __init__(self, filename, path="cell_info"):
-self.filename: t.Optional[t.Union[str, PosixPath]] = filename
+self.filename: t.Optional[t.Union[str, Path]] = filename
self.cinfo_path: t.Optional[str] = path
self._edgemasks: t.Optional[str] = None
self._tile_size: t.Optional[int] = None
@classmethod
-def from_source(cls, source: t.Union[PosixPath, str]):
+def from_source(cls, source: t.Union[Path, str]):
return cls(Path(source))
def _log(self, message: str, level: str = "warn"):
......
""" """
Anthology of interfaces for different parsers and lack of them. Anthology of interfaces fordispatch_metadata_parse different parsers and lack of them.
ALIBY decides on using different metadata parsers based on two elements: ALIBY decides on using different metadata parsers based on two elements:
...@@ -16,7 +16,7 @@ import logging ...@@ -16,7 +16,7 @@ import logging
import os import os
import typing as t import typing as t
from datetime import datetime from datetime import datetime
from pathlib import Path, PosixPath from pathlib import Path
import pandas as pd import pandas as pd
from pytz import timezone from pytz import timezone
...@@ -176,7 +176,7 @@ def get_meta_from_legacy(parsed_metadata: dict): ...@@ -176,7 +176,7 @@ def get_meta_from_legacy(parsed_metadata: dict):
return result return result
def parse_swainlab_metadata(filedir: t.Union[str, PosixPath]): def parse_swainlab_metadata(filedir: t.Union[str, Path]):
""" """
Dispatcher function that determines which parser to use based on the file ending. Dispatcher function that determines which parser to use based on the file ending.
...@@ -205,7 +205,7 @@ def parse_swainlab_metadata(filedir: t.Union[str, PosixPath]): ...@@ -205,7 +205,7 @@ def parse_swainlab_metadata(filedir: t.Union[str, PosixPath]):
return minimal_meta return minimal_meta
def dispatch_metadata_parser(filepath: t.Union[str, PosixPath]): def dispatch_metadata_parser(filepath: t.Union[str, Path]):
""" """
Function to dispatch different metadata parsers that convert logfiles into a Function to dispatch different metadata parsers that convert logfiles into a
basic metadata dictionary. Currently only contains the swainlab log parsers. basic metadata dictionary. Currently only contains the swainlab log parsers.
...@@ -222,7 +222,7 @@ def dispatch_metadata_parser(filepath: t.Union[str, PosixPath]): ...@@ -222,7 +222,7 @@ def dispatch_metadata_parser(filepath: t.Union[str, PosixPath]):
return parsed_meta return parsed_meta
def dir_to_meta(path: PosixPath, suffix="tiff"): def dir_to_meta(path: Path, suffix="tiff"):
filenames = list(path.glob(f"*.{suffix}")) filenames = list(path.glob(f"*.{suffix}"))
try: try:
......
...@@ -2,7 +2,7 @@ import logging
import typing as t
from copy import copy
from functools import cached_property, lru_cache
-from pathlib import PosixPath
+from pathlib import Path
import bottleneck as bn
import h5py
...@@ -11,7 +11,7 @@ import pandas as pd
from agora.io.bridge import BridgeH5
from agora.io.decorators import _first_arg_str_to_df
-from agora.utils.association import validate_association
+from agora.utils.indexing import validate_association
from agora.utils.kymograph import add_index_levels
from agora.utils.merge import apply_merges
...@@ -23,7 +23,7 @@ class Signal(BridgeH5):
Signal assumes that the metadata and data are accessible to perform time-adjustments and apply previously recorded post-processes.
"""
-def __init__(self, file: t.Union[str, PosixPath]):
+def __init__(self, file: t.Union[str, Path]):
"""Define index_names for dataframes, candidate fluorescence channels, and composite statistics."""
super().__init__(file, flag=None)
self.index_names = (
...@@ -47,20 +47,25 @@ class Signal(BridgeH5):
def __getitem__(self, dsets: t.Union[str, t.Collection]):
"""Get and potentially pre-process data from h5 file and return as a dataframe."""
if isinstance(dsets, str): # no pre-processing
-df = self.get_raw(dsets)
-return self.add_name(df, dsets)
+return self.get(dsets)
elif isinstance(dsets, list): # pre-processing
is_bgd = [dset.endswith("imBackground") for dset in dsets]
# Check we are not comparing tile-indexed and cell-indexed data
assert sum(is_bgd) == 0 or sum(is_bgd) == len(
dsets
), "Tile data and cell data can't be mixed"
-return [
-self.add_name(self.apply_prepost(dset), dset) for dset in dsets
-]
+return [self.get(dset) for dset in dsets]
else:
raise Exception(f"Invalid type {type(dsets)} to get datasets")
def get(self, dsets: t.Union[str, t.Collection], **kwargs):
"""Get and potentially pre-process data from h5 file and return as a dataframe."""
if isinstance(dsets, str): # no pre-processing
df = self.get_raw(dsets, **kwargs)
prepost_applied = self.apply_prepost(dsets, **kwargs)
return self.add_name(prepost_applied, dsets)
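With the new `get`, both branches of `__getitem__` go through one path that applies any recorded post-processes and then labels the resulting dataframe. A hedged usage sketch, assuming `Signal` is importable as `agora.io.signal.Signal` (consistent with the imports above); the file path and dataset names are made up:

```python
from agora.io.signal import Signal

signal = Signal("results/position_001.h5")  # hypothetical ALIBY output file

# A single dataset name returns one post-processed DataFrame...
volumes = signal["extraction/general/None/volume"]

# ...while a list returns a list of DataFrames, one per dataset.
several = signal[
    ["extraction/general/None/volume", "extraction/GFP/np_max/median"]
]
```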
@staticmethod
def add_name(df, name):
"""Add column of identical strings to a dataframe."""
...@@ -129,18 +134,24 @@ class Signal(BridgeH5):
Returns an array with three columns: the tile id, the mother label, and the daughter label.
"""
if lineage_location is None:
-lineage_location = "postprocessing/lineage"
+lineage_location = "modifiers/lineage_merged"
if merged:
lineage_location += "_merged"
with h5py.File(self.filename, "r") as f:
# if lineage_location not in f:
# lineage_location = lineage_location.split("_")[0]
if lineage_location not in f:
lineage_location = "postprocessing/lineage"
tile_mo_da = f[lineage_location]
-lineage = np.array(
-(
-tile_mo_da["trap"],
-tile_mo_da["mother_label"],
-tile_mo_da["daughter_label"],
-)
-).T
+if isinstance(tile_mo_da, h5py.Dataset):
+lineage = tile_mo_da[()]
+else:
+lineage = np.array(
+(
+tile_mo_da["trap"],
+tile_mo_da["mother_label"],
+tile_mo_da["daughter_label"],
+)
+).T
return lineage
@_first_arg_str_to_df
...@@ -171,7 +182,7 @@ class Signal(BridgeH5):
"""
if isinstance(merges, bool):
-merges: np.ndarray = self.get_merges() if merges else np.array([])
+merges: np.ndarray = self.load_merges() if merges else np.array([])
if merges.any():
merged = apply_merges(data, merges)
else:
...@@ -292,7 +303,7 @@ class Signal(BridgeH5):
self._log(f"Could not fetch dataset {dataset}: {e}", "error")
raise e
-def get_merges(self):
+def load_merges(self):
"""Get merge events going up to the first level."""
with h5py.File(self.filename, "r") as f:
merges = f.get("modifiers/merges", np.array([]))
...@@ -309,7 +320,9 @@ class Signal(BridgeH5):
with h5py.File(self.filename, "r") as f:
picks = set()
if path in f:
-picks = set(zip(*[f[path + name] for name in names]))
+picks = set(
zip(*[f[path + name] for name in names if name in f[path]])
)
return picks
def dataset_to_df(self, f: h5py.File, path: str) -> pd.DataFrame:
......
...@@ -172,7 +172,6 @@ class DynamicWriter:
# append or create new dataset
self._append(value, key, hgroup)
except Exception as e:
print(key, value)
self._log(
f"{key}:{value} could not be written: {e}", "error"
)
...@@ -550,7 +549,7 @@ class Writer(BridgeH5):
compression=kwargs.get("compression", None),
)
dset = f[values_path]
-dset[()] = df.values
+dset[()] = df.values.astype("float16")
# create dataset and write indices
if not len(df): # Only write more if not empty
...@@ -567,21 +566,18 @@ class Writer(BridgeH5):
)
dset = f[indices_path]
dset[()] = df.index.get_level_values(level=name).tolist()
-# create dataset and write columns
-if (
-df.columns.dtype == int
-or df.columns.dtype == np.dtype("uint")
-or df.columns.name == "timepoint"
-):
-tp_path = path + "/timepoint"
-f.create_dataset(
-name=tp_path,
-shape=(df.shape[1],),
-maxshape=(max_tps,),
-dtype="uint16",
-)
-tps = list(range(df.shape[1]))
-f[tp_path][tps] = tps
+# create dataset and write time points as columns
+tp_path = path + "/timepoint"
+if tp_path not in f:
+f.create_dataset(
+name=tp_path,
+shape=(df.shape[1],),
+maxshape=(max_tps,),
+dtype="uint16",
+)
+tps = list(range(df.shape[1]))
+f[tp_path][tps] = tps
else:
f[path].attrs["columns"] = df.columns.tolist()
else:
......
#!/usr/bin/env jupyter #!/usr/bin/env jupyter
"""
Utilities based on association are used to efficiently acquire indices of tracklets with some kind of relationship.
This can be:
- Cells that are to be merged
- Cells that have a linear relationship
"""
import numpy as np
import typing as t
def validate_association(
association: np.ndarray,
indices: np.ndarray,
match_column: t.Optional[int] = None,
) -> t.Tuple[np.ndarray, np.ndarray]:
"""Select rows from the first array that are present in both.
We use casting for fast multiindexing, generalising for lineage dynamics
Parameters
----------
association : np.ndarray
2-D array where columns are (trap, mother, daughter) or 3-D array where
dimensions are (X,trap,2), containing tuples ((trap,mother), (trap,daughter))
across the 3rd dimension.
indices : np.ndarray
2-D array where each column is a different level. This should not include mother_label.
match_column: int
int indicating a specific column is required to match (i.e.
0-1 for target-source when trying to merge tracklets or mother-bud for lineage)
must be present in indices. If it is false one match suffices for the resultant indices
vector to be True.
Returns
-------
np.ndarray
1-D boolean array indicating valid merge events.
np.ndarray
1-D boolean array indicating indices with an association relationship.
Examples
--------
>>> import numpy as np
>>> from agora.utils.association import validate_association
>>> merges = np.array(range(12)).reshape(3,2,2)
>>> indices = np.array(range(6)).reshape(3,2)
>>> print(merges, indices)
>>> print(merges); print(indices)
[[[ 0 1]
[ 2 3]]
[[ 4 5]
[ 6 7]]
[[ 8 9]
[10 11]]]
[[0 1]
[2 3]
[4 5]]
>>> valid_associations, valid_indices = validate_association(merges, indices)
>>> print(valid_associations, valid_indices)
[ True False False] [ True True False]
"""
if association.ndim == 2:
# Reshape into 3-D array for broadcasting if needed
# association = np.stack(
# (association[:, [0, 1]], association[:, [0, 2]]), axis=1
# )
association = last_col_as_rows(association)
# Compare existing association with available indices
# Swap trap and label axes for the association array to correctly cast
valid_ndassociation = association[..., None] == indices.T[None, ...]
# Broadcasting is confusing (but efficient):
# First we check the dimension across trap and cell id, to ensure both match
valid_cell_ids = valid_ndassociation.all(axis=2)
if match_column is None:
# Then we check the merge tuples to check which cases have both target and source
valid_association = valid_cell_ids.any(axis=2).all(axis=1)
# Finally we check the dimension that crosses all indices, to ensure the pair
# is present in a valid merge event.
valid_indices = (
valid_ndassociation[valid_association].all(axis=2).any(axis=(0, 1))
)
else: # We fetch specific indices if we aim for the ones with one present
valid_indices = valid_cell_ids[:, match_column].any(axis=0)
# Valid association then becomes a boolean array, true means that there is a
# match (match_column) between that cell and the index
valid_association = (
valid_cell_ids[:, match_column] & valid_indices
).any(axis=1)
return valid_association, valid_indices
def last_col_as_rows(ndarray: np.ndarray):
"""
Convert the last column to a new row while repeating all previous indices.
This is useful when converting a signal multiindex before comparing association.
"""
columns = np.arange(ndarray.shape[1])
return np.stack(
(
ndarray[:, np.delete(columns, -1)],
ndarray[:, np.delete(columns, -2)],
),
axis=1,
)
...@@ -9,7 +9,7 @@ def _str_to_int(x: str or None):
"""
Cast string as int if possible. If Nonetype return None.
"""
-if x:
+if x is not None:
try:
return int(x)
except:
......
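The guard change matters because `if x:` also skips falsy-but-valid inputs such as the empty string, whereas `if x is not None:` only skips `None`. A small self-contained illustration (the `except` body below returns the input unchanged, which is an assumption since the original handler is truncated in the diff):

```python
def str_to_int_old(x):
    if x:  # "" and None are both skipped and fall through to an implicit None
        try:
            return int(x)
        except ValueError:
            return x

def str_to_int_new(x):
    if x is not None:  # only None is skipped; "" now reaches the try block
        try:
            return int(x)
        except ValueError:
            return x

print(repr(str_to_int_old("")), repr(str_to_int_new("")))      # None ''
print(repr(str_to_int_old("42")), repr(str_to_int_new("42")))  # '42' cast to 42 in both
```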
#!/usr/bin/env jupyter
"""
Utilities based on association are used to efficiently acquire indices of tracklets with some kind of relationship.
This can be:
- Cells that are to be merged
- Cells that have a linear relationship
"""
import numpy as np
import typing as t
def validate_association(
association: np.ndarray,
indices: np.ndarray,
match_column: t.Optional[int] = None,
) -> t.Tuple[np.ndarray, np.ndarray]:
"""Select rows from the first array that are present in both.
We use casting for fast multiindexing, generalising for lineage dynamics
Parameters
----------
association : np.ndarray
2-D array where columns are (trap, mother, daughter) or 3-D array where
dimensions are (X,trap,2), containing tuples ((trap,mother), (trap,daughter))
across the 3rd dimension.
indices : np.ndarray
2-D array where each column is a different level. This should not include mother_label.
match_column: int
int indicating a specific column is required to match (i.e.
0-1 for target-source when trying to merge tracklets or mother-bud for lineage)
must be present in indices. If it is false one match suffices for the resultant indices
vector to be True.
Returns
-------
np.ndarray
1-D boolean array indicating valid merge events.
np.ndarray
1-D boolean array indicating indices with an association relationship.
Examples
--------
>>> import numpy as np
>>> from agora.utils.indexing import validate_association
>>> merges = np.array(range(12)).reshape(3,2,2)
>>> indices = np.array(range(6)).reshape(3,2)
>>> print(merges, indices)
>>> print(merges); print(indices)
[[[ 0 1]
[ 2 3]]
[[ 4 5]
[ 6 7]]
[[ 8 9]
[10 11]]]
[[0 1]
[2 3]
[4 5]]
>>> valid_associations, valid_indices = validate_association(merges, indices)
>>> print(valid_associations, valid_indices)
[ True False False] [ True True False]
"""
if association.ndim == 2:
# Reshape into 3-D array for broadcasting if needed
# association = np.stack(
# (association[:, [0, 1]], association[:, [0, 2]]), axis=1
# )
association = _assoc_indices_to_3d(association)
# Compare existing association with available indices
# Swap trap and label axes for the association array to correctly cast
valid_ndassociation = association[..., None] == indices.T[None, ...]
# Broadcasting is confusing (but efficient):
# First we check the dimension across trap and cell id, to ensure both match
valid_cell_ids = valid_ndassociation.all(axis=2)
if match_column is None:
# Then we check the merge tuples to check which cases have both target and source
valid_association = valid_cell_ids.any(axis=2).all(axis=1)
# Finally we check the dimension that crosses all indices, to ensure the pair
# is present in a valid merge event.
valid_indices = (
valid_ndassociation[valid_association].all(axis=2).any(axis=(0, 1))
)
else: # We fetch specific indices if we aim for the ones with one present
valid_indices = valid_cell_ids[:, match_column].any(axis=0)
# Valid association then becomes a boolean array, true means that there is a
# match (match_column) between that cell and the index
valid_association = (
valid_cell_ids[:, match_column] & valid_indices
).any(axis=1)
return valid_association, valid_indices
def _assoc_indices_to_3d(ndarray: np.ndarray):
"""
Convert the last column to a new row while repeating all previous indices.
This is useful when converting a signal multiindex before comparing association.
Assumes the input array has shape (N,3)
"""
result = ndarray
if len(ndarray) and ndarray.ndim > 1:
if ndarray.shape[1] == 3: # Faster indexing for single positions
result = np.transpose(
np.hstack((ndarray[:, [0]], ndarray)).reshape(-1, 2, 2),
axes=[0, 2, 1],
)
else: # 20% slower but more general indexing
columns = np.arange(ndarray.shape[1])
result = np.stack(
(
ndarray[:, np.delete(columns, -1)],
ndarray[:, np.delete(columns, -2)],
),
axis=1,
)
return result
def _3d_index_to_2d(array: np.ndarray):
"""
Opposite to _assoc_indices_to_3d.
"""
result = array
if len(array):
result = np.concatenate(
(array[:, 0, :], array[:, 1, 1, np.newaxis]), axis=1
)
return result
def compare_indices(x: np.ndarray, y: np.ndarray) -> np.ndarray:
"""
Fetch two 2-D indices and return a binary 2-D matrix
where a True value links two cells whose indices are all the same
"""
return (x[..., None] == y.T[None, ...]).all(axis=1)
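`compare_indices` broadcasts one index array against the other and keeps only exact row matches. A quick self-contained check of that broadcast (the function body is copied from the diff above; the example arrays are made up):

```python
import numpy as np

def compare_indices(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # (N, d, 1) == (1, d, M) broadcasts to (N, d, M); all(axis=1) keeps rows
    # of x that match a row of y in every column.
    return (x[..., None] == y.T[None, ...]).all(axis=1)

x = np.array([[0, 1], [0, 2], [3, 4]])  # e.g. (trap, cell_label) pairs
y = np.array([[0, 2], [3, 4]])
print(compare_indices(x, y))
# [[False False]
#  [ True False]
#  [False  True]]
```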
...@@ -6,6 +6,8 @@ import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from agora.utils.indexing import validate_association
index_row = t.Tuple[str, str, int, int]
...@@ -120,7 +122,9 @@ def bidirectional_retainment_filter(
def melt_reset(df: pd.DataFrame, additional_ids: t.Dict[str, pd.Series] = {}):
new_df = add_index_levels(df, additional_ids)
-return new_df.melt(ignore_index=False).reset_index()
+return new_df.melt(
ignore_index=False, var_name="time (minutes)", value_name="signal"
).reset_index()
# Drop cells that if used would reduce info the most
...@@ -175,3 +179,67 @@ def drop_mother_label(index: pd.MultiIndex) -> np.ndarray:
def get_index_as_np(signal: pd.DataFrame):
# Get mother labels from multiindex dataframe
return np.array(signal.index.to_list())
def standard_filtering(
raw: pd.DataFrame,
lin: np.ndarray,
presence_high: float = 0.8,
presence_low: int = 7,
):
# Get all mothers
_, valid_indices = validate_association(
lin, np.array(raw.index.to_list()), match_column=0
)
in_lineage = raw.loc[valid_indices]
# Filter mothers by presence
present = in_lineage.loc[
in_lineage.notna().sum(axis=1) > (in_lineage.shape[1] * presence_high)
]
# Get indices
indices = np.array(present.index.to_list())
to_cast = np.stack((lin[:, :2], lin[:, [0, 2]]), axis=1)
ndin = to_cast[..., None] == indices.T[None, ...]
# use indices to fetch all daughters
valid_association = ndin.all(axis=2)[:, 0].any(axis=-1)
# Remove repeats
mothers, daughters = np.split(to_cast[valid_association], 2, axis=1)
mothers = mothers[:, 0]
daughters = daughters[:, 0]
d_m_dict = {tuple(d): m[-1] for m, d in zip(mothers, daughters)}
# assuming unique sorts
raw_mothers = raw.loc[_as_tuples(mothers)]
raw_mothers["mother_label"] = 0
raw_daughters = raw.loc[_as_tuples(daughters)]
raw_daughters["mother_label"] = d_m_dict.values()
concat = pd.concat((raw_mothers, raw_daughters)).sort_index()
concat.set_index("mother_label", append=True, inplace=True)
# Last filter to remove tracklets that are too short
removed_buds = concat.notna().sum(axis=1) <= presence_low
filt = concat.loc[~removed_buds]
# We check that no mothers are left child-less
m_d_dict = {tuple(m): [] for m in mothers}
for (trap, d), m in d_m_dict.items():
m_d_dict[(trap, m)].append(d)
for trap, daughter, mother in concat.index[removed_buds]:
idx_to_delete = m_d_dict[(trap, mother)].index(daughter)
del m_d_dict[(trap, mother)][idx_to_delete]
bud_free = []
for m, d in m_d_dict.items():
if not d:
bud_free.append(m)
final_result = filt.drop(bud_free)
# In the end, we get the mothers present for more than a presence_high fraction of the experiment
# and their tracklets present for more than presence_low time points
return final_result
...@@ -9,7 +9,7 @@ import numpy as np
import pandas as pd
from utils_find_1st import cmp_larger, find_1st
-from agora.utils.association import validate_association
+from agora.utils.indexing import compare_indices, validate_association
def apply_merges(data: pd.DataFrame, merges: np.ndarray):
...@@ -31,23 +31,29 @@ def apply_merges(data: pd.DataFrame, merges: np.ndarray):
"""
indices = data.index
if "mother_label" in indices.names:
indices = indices.droplevel("mother_label")
valid_merges, indices = validate_association(
-merges, np.array(list(data.index))
+merges, np.array(list(indices))
)
# Assign non-merged
merged = data.loc[~indices]
# Implement the merges and drop source rows.
# TODO Use matrices to perform merges in batch
# for efficiency
if valid_merges.any():
to_merge = data.loc[indices]
-for target, source in merges[valid_merges]:
-target, source = tuple(target), tuple(source)
+targets, sources = zip(*merges[valid_merges])
+for source, target in zip(sources, targets):
+target = tuple(target)
to_merge.loc[target] = join_tracks_pair(
to_merge.loc[target].values,
-to_merge.loc[source].values,
+to_merge.loc[tuple(source)].values,
)
-to_merge.drop(source, inplace=True)
+to_merge.drop(map(tuple, sources), inplace=True)
merged = pd.concat((merged, to_merge), names=data.index.names)
return merged
...@@ -57,7 +63,84 @@ def join_tracks_pair(target: np.ndarray, source: np.ndarray) -> np.ndarray:
"""
Join two tracks and return the new value of the target.
"""
-target_copy = copy(target)
+target_copy = target
end = find_1st(target_copy[::-1], 0, cmp_larger)
target_copy[-end:] = source[-end:]
return target_copy
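`join_tracks_pair` fills the trailing-zero tail of the target track with the corresponding tail of the source track; `find_1st(target[::-1], 0, cmp_larger)` is effectively the count of trailing zeros. A numpy-only sketch of the same idea (illustrative, not the library's implementation, and assuming the target really has a trailing gap):

```python
import numpy as np

target = np.array([1.0, 2.0, 3.0, 0.0, 0.0])  # track that stops early
source = np.array([0.0, 0.0, 0.0, 4.0, 5.0])  # track that continues afterwards

end = int(np.argmax(target[::-1] > 0))  # number of trailing zeros in target
joined = target.copy()
joined[-end:] = source[-end:]           # splice the source tail onto the target
print(joined)                           # [1. 2. 3. 4. 5.]
```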
def group_merges(merges: np.ndarray) -> t.List[t.Tuple]:
# Return a list where the cell is present as source and target
# (multimerges)
sources_targets = compare_indices(merges[:, 0, :], merges[:, 1, :])
is_multimerge = sources_targets.any(axis=0) | sources_targets.any(axis=1)
is_monomerge = ~is_multimerge
multimerge_subsets = union_find(zip(*np.where(sources_targets)))
merge_groups = [merges[np.array(tuple(x))] for x in multimerge_subsets]
sorted_merges = list(map(sort_association, merge_groups))
# Ensure that source and target are at the edges
return [
*sorted_merges,
*[[event] for event in merges[is_monomerge]],
]
def union_find(lsts):
sets = [set(lst) for lst in lsts if lst]
merged = True
while merged:
merged = False
results = []
while sets:
common, rest = sets[0], sets[1:]
sets = []
for x in rest:
if x.isdisjoint(common):
sets.append(x)
else:
merged = True
common |= x
results.append(common)
sets = results
return sets
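`union_find` collapses overlapping merge pairs into connected groups, which is how multi-way merges are gathered before `sort_association` orders them. A quick self-contained check of its behaviour (the helper is copied verbatim from the diff above; the input pairs are made up):

```python
def union_find(lsts):
    # Iteratively merge any sets that share an element until none overlap.
    sets = [set(lst) for lst in lsts if lst]
    merged = True
    while merged:
        merged = False
        results = []
        while sets:
            common, rest = sets[0], sets[1:]
            sets = []
            for x in rest:
                if x.isdisjoint(common):
                    sets.append(x)
                else:
                    merged = True
                    common |= x
            results.append(common)
        sets = results
    return sets

print(union_find([(0, 1), (1, 2), (3, 4)]))  # [{0, 1, 2}, {3, 4}]
```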
def sort_association(array: np.ndarray):
# Sort the internal associations
order = np.where(
(array[:, 0, ..., None] == array[:, 1].T[None, ...]).all(axis=1)
)
res = []
[res.append(x) for x in np.flip(order).flatten() if x not in res]
sorted_array = array[np.array(res)]
return sorted_array
def merge_association(
association: np.ndarray, merges: np.ndarray
) -> np.ndarray:
grouped_merges = group_merges(merges)
flat_indices = association.reshape(-1, 2)
comparison_mat = compare_indices(merges[:, 0], flat_indices)
valid_indices = comparison_mat.any(axis=0)
if valid_indices.any(): # Where valid, perform transformation
replacement_d = {}
for dataset in grouped_merges:
for k in dataset:
replacement_d[tuple(k[0])] = dataset[-1][1]
flat_indices[valid_indices] = [
replacement_d[tuple(i)] for i in flat_indices[valid_indices]
]
merged_indices = flat_indices.reshape(-1, 2, 2)
return merged_indices
...@@ -6,7 +6,7 @@ import logging
import re
import time
import typing as t
-from pathlib import Path, PosixPath
+from pathlib import Path
from time import perf_counter
import baby.errors
...@@ -108,9 +108,7 @@ class BabyParameters(ParametersABC):
tf_version=2,
)
-def update_baby_modelset(
-self, path: t.Union[str, PosixPath, t.Dict[str, str]]
-):
+def update_baby_modelset(self, path: t.Union[str, Path, t.Dict[str, str]]):
""" """
Replace default BABY model and flattener with another one from a folder outputted Replace default BABY model and flattener with another one from a folder outputted
by our standard retraining script. by our standard retraining script.
...@@ -141,6 +139,14 @@ class BabyRunner(StepABC): ...@@ -141,6 +139,14 @@ class BabyRunner(StepABC):
if parameters is None if parameters is None
else parameters.model_config else parameters.model_config
) )
tiler_z = self.tiler.shape[-3]
model_name = self.model_config["flattener_file"]
if tiler_z != 5:
assert (
f"{tiler_z}z" in model_name
), f"Tiler z-stack ({tiler_z}) and Model shape ({model_name}) do not match "
self.brain = BabyBrain(**self.model_config)
self.crawler = BabyCrawler(self.brain)
self.bf_channel = self.tiler.ref_channel_index
......