Compare revisions: swain-lab/aliby/aliby-mirror ↔ swain-lab/aliby/alibylite (162 commits on source; 164 additions and 1237 deletions)
# ALIBY (Analyser of Live-cell Imaging for Budding Yeast)
[![docs](https://readthedocs.org/projects/aliby/badge/?version=master)](https://aliby.readthedocs.io/en/latest)
[![PyPI version](https://badge.fury.io/py/aliby.svg)](https://badge.fury.io/py/aliby)
[![pipeline](https://gitlab.com/aliby/aliby/badges/master/pipeline.svg?key_text=master)](https://gitlab.com/aliby/aliby/-/pipelines)
[![dev pipeline](https://gitlab.com/aliby/aliby/badges/dev/pipeline.svg?key_text=dev)](https://gitlab.com/aliby/aliby/-/commits/dev)
[![coverage](https://gitlab.com/aliby/aliby/badges/dev/coverage.svg)](https://gitlab.com/aliby/aliby/-/commits/dev)
End-to-end processing of cell microscopy time-lapses. ALIBY automates segmentation, tracking, lineage predictions, post-processing and report production. It leverages the existing Python ecosystem and open-source scientific software available to produce seamless and standardised pipelines.
## Quickstart Documentation

On Windows, first install the [Visual C++ redistributable for Visual Studio](https://visualstudio.microsoft.com/downloads/#microsoft-visual-c-redistributable-for-visual-studio-2022). Native MacOS support is under work, but you can use containers (e.g., Docker, Podman) in the meantime.

To analyse local data:
```bash
pip install aliby
```
Add any of the optional extras `omero` and `utils` (e.g., `pip install aliby[omero, utils]`): `omero` provides tools to connect with an OMERO server and `utils` provides visualisation, user interface and additional deep learning tools.

See our [installation instructions](https://aliby.readthedocs.io/en/latest/INSTALL.html) for more details.

### CLI

If installed via poetry, you have access to a Command Line Interface (CLI):
```bash
aliby-run --expt_id EXPT_PATH --distributed 4 --tps None
```
To run against an OMERO server, the basic arguments are:
```bash
aliby-run --expt_id XXX --host SERVER.ADDRESS --user USER --password PASSWORD
```
The output is a folder with the original logfiles and a set of hdf5 files, one with the results of each multidimensional image inside.

For more information, including available options, see the page on [running the analysis pipeline](https://aliby.readthedocs.io/en/latest/PIPELINE.html).

## Installation

We recommend installing both ALIBY and WELA.

To begin, you should install [miniconda](https://docs.anaconda.com/free/miniconda/index.html) and [poetry](https://python-poetry.org).

Once poetry is installed, we suggest running
```bash
poetry config virtualenvs.create false
```
so that only conda creates virtual environments.

Then:

- Create and activate an alibylite virtual environment:
```bash
conda create -n alibylite python=3.10
conda activate alibylite
```
- Git clone alibylite, change to the alibylite directory with the poetry.lock file, and use poetry to install:
```bash
poetry install
```
- Git clone wela, change to the wela directory with the poetry.lock file, and use poetry to install:
```bash
poetry install
```
- Use pip to install your usual Python working environment. For example:
```bash
pip install ipython seaborn
```
- Install omero-py. For a Mac, use:
```bash
conda install -c conda-forge zeroc-ice==3.6.5
conda install omero-py
```
  For everything else, use:
```bash
poetry install --all-extras
```
- You may have an issue with Matplotlib crashing. Use conda to install a different version. List the available versions with
```bash
conda search -f matplotlib
```
  and then install one, for example:
```bash
conda install matplotlib=3.8.0
```

## Using specific components

### Access raw data

ALIBY's tooling can also be used as an interface to OMERO servers, for example, to fetch a brightfield channel.
```python
from aliby.io.omero import Dataset, Image

server_info = {
    "host": "host_address",
    "username": "user",
    "password": "xxxxxx",
}
expt_id = XXXX
tps = [0, 1]  # subset of time points to fetch

with Dataset(expt_id, **server_info) as conn:
    image_ids = conn.get_images()

# To get the first position
with Image(list(image_ids.values())[0], **server_info) as image:
    dimg = image.data
    imgs = dimg[
        tps, image.metadata["channels"].index("Brightfield"), 2, ...
    ].compute()
    # tps time points, Brightfield channel, z=2, all x,y
```

### Tiling the raw data

A `Tiler` object performs trap registration. It can be built in different ways, but the simplest is from an image and the default parameters:
```python
from aliby.tile.tiler import Tiler, TilerParameters

with Image(list(image_ids.values())[0], **server_info) as image:
    tiler = Tiler.from_image(image, TilerParameters.default())
    tiler.run_tp(0)
```
The initialisation should take a few seconds, as it needs to align the images in time.

It fetches the metadata from the Image object and uses the TilerParameters values (all Processes in aliby depend on an associated Parameters class, which is in essence a dictionary turned into a class).

#### Get a timelapse for a given tile (remote connection)
```python
fpath = "h5/location"
tile_id = 9
trange = range(0, 10)
ncols = 8

riv = remoteImageViewer(fpath)
trap_tps = [riv.tiler.get_tiles_timepoint(tile_id, t) for t in trange]

# You can also access labelled traps
m_ts = riv.get_labelled_trap(tile_id=0, tps=[0])

# And plot them directly
riv.plot_labelled_trap(trap_id=0, channels=[0, 1, 2, 3], trange=range(10))
```
Depending on the network speed, this can take several seconds at the moment. For a speed-up, take fewer z-positions if you can.

#### Get the tiles for a given time point

Alternatively, if you want to get all the traps at a given time point:
```python
timepoint = (4, 6)
tiler.get_tiles_timepoint(timepoint, channels=None, z=[0, 1, 2, 3, 4])
```

### Contributing

See [CONTRIBUTING](https://aliby.readthedocs.io/en/latest/INSTALL.html) on how to help out or get involved.
#+title: aliby
The microscope visits multiple positions during an experiment. Each position may have a different setup or strain. We denote this strain as a *group*. For every position, we take an image for every time point.
We divide all images into *tiles*, one per trap. Baby determines masks, mother-bud pairs, and tracking for each tile. We obtain data on individual cells first for each tile and then for each position for all time points: *cells* and *signal* provide this information for a position; *grouper* concatenates over all positions.
All global parameters, such as possible fluorescence channels and pixel sizes, are stored in *global_parameters*.
* aliby/pipeline
Runs the *tiler*, *baby*, and *extraction* steps of the pipeline, and then *postprocessing*.
The *run* function loops through positions, calling *run_one_position*, which loops through time points.
For each time point, each step of the pipeline has a *_run_tp* function, which StepABC renames to *run_tp*, to process one time point for a position.
Extractor does not have an independent writer, but writes to the h5 file in *_run_tp*.
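As a minimal sketch (mirroring the fuller pipeline examples at the end of this document; the experiment ID and directory are placeholders):
#+begin_src python
from aliby.pipeline import Pipeline, PipelineParameters

# Placeholder experiment path and output directory.
params = PipelineParameters.default(
    general={"expt_id": "EXPT_PATH", "distributed": 2, "directory": "."}
)
p = Pipeline(params)
p.run()  # loops through positions via run_one_position, then time points
#+end_src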
* aliby/tile/tiler
Tiles image into smaller regions of interest or tiles, one per trap, for faster processing. We ignore tiles without cells.
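A minimal sketch, assuming an Image object obtained as in the README above:
#+begin_src python
from aliby.tile.tiler import Tiler, TilerParameters

# image is an aliby Image object (e.g. from aliby.io.omero).
tiler = Tiler.from_image(image, TilerParameters.default())
tiler.run_tp(0)  # register traps and tile the first time point
#+end_src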
* aliby/baby_sitter
Interfaces with Baby through the *BabyRunner* class, which returns a dict of Baby's results.
* extraction/core/extractor
Extracts areas and volumes and the fluorescence data from the images for the cells in one position, via the image tiles, using the cell masks found by Baby.
We save the cell properties we wish to extract as a nested dictionary, such as
{'general': {'None': ['area', 'volume', 'eccentricity']}}.
*extract_tp* extracts data for one time point.
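A minimal sketch, following the fuller extraction example later in this document (the experiment ID and tile size are placeholders):
#+begin_src python
from extraction.core.extractor import Extractor
from extraction.core.parameters import Parameters

# Nested dictionary: channel -> z-reduction -> cell metrics.
params = Parameters(tree={"general": {"None": ["area", "volume", "eccentricity"]}})
ext = Extractor(params, omero_id=19310)
cell_data = ext.extract_tp(tp=1, tile_size=117)  # data for one time point
#+end_src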
** extraction/core/functions/cell
Defines the standard functions, such as area and median, that we apply to pixels from individual cells.
** extraction/core/functions/trap
Determines properties of a tile's background.
** extraction/core/functions/distributors
Collapses multiple z-sections to a 2D image.
** extraction/core/functions/defaults
Defines the standard fluorescence signals and metrics, such as the median, that we extract in *exparams_from_meta*.
** extraction/core/function/custom/localisation
Defines more complex functions to apply to cells, such as *nuc_est_conv*, which estimates nuclear localisation of a fluorescent protein.
* agora/bridge
Interfaces with h5 files.
* agora/cells
Accesses information on cells and masks in tiles from an h5 file.
* agora/signal
Gets extracted properties, such as median fluorescence, for all cells and all time points from an h5 file - data for one position.
Signal applies picking and merging of cells using the choices made by *picker* and *merger*. *get_raw* gets the data from the h5 file without any picking and merging.
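A minimal sketch, assuming the import path agora.io.signal, that Signal is constructed from the path to one position's h5 file, and that the dataset path below exists:
#+begin_src python
from agora.io.signal import Signal

signal = Signal("aliby_output/experiment/position_001.h5")  # placeholder path
# get_raw skips the picking and merging that Signal normally applies.
volumes = signal.get_raw("extraction/general/None/volume")
#+end_src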
* postprocessor/core/processor
For one position, the *run* function performs picking, of appropriate cells, and merging, of tracklets, via *run_prepost* and then runs processes, such as the *buddings* and *bud_metric* functions, on signals, such as *volume*, to get new signals, such as *buddings* and *bud_volume*.
*run_process* writes the results to an h5 file.
The class *PostProcessorParameters* lists the standard processes we perform, such as running *buddings* and *bud_metric* on *area*.
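A rough sketch only; the class name and constructor signature here are assumptions, and the h5 path is a placeholder:
#+begin_src python
from postprocessor.core.processor import PostProcessor, PostProcessorParameters

pp_params = PostProcessorParameters.default()
pp = PostProcessor("aliby_output/experiment/position_001.h5", pp_params)
pp.run()  # picking and merging via run_prepost, then processes such as buddings
#+end_src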
* postprocessor/core/reshapers/picker
Selects cells from a Signal according to whether there is lineage information for them and how long they remain in the experiment, writing the choices to the h5 file.
* postprocessor/core/reshapers/merger
Combines tracks that should be a single track of the same cell, writing the choices to the h5 file.
* agora/utils/indexing
Core code needed when *picker* uses Baby's lineage information to select mother-bud pairs in a Signal.
* postprocessor/grouper
*concat_signal* concatenates signals from different h5 files - we have one per position - to generate dataframes for the entire experiment.
It uses either *concat_signal_ind* for independent signals or *concat_standard*.
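A minimal sketch, assuming Grouper takes the directory holding the per-position h5 files and that the signal path below exists:
#+begin_src python
from postprocessor.grouper import Grouper

grouper = Grouper("aliby_output/experiment/")  # placeholder directory
# One dataframe spanning every position in the experiment.
volumes = grouper.concat_signal("extraction/general/None/volume")
#+end_src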
* aliby/utils/argo
Gets information on the data available in an OMERO database.
* aliby/io/omero
Contains functions to interact with OMERO and extract information on an *Image* corresponding to an OMERO image ID or a *Dataset* corresponding to an OMERO experiment ID.
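A minimal sketch, reusing the server_info dictionary and experiment ID from the README above:
#+begin_src python
from aliby.io.omero import Dataset, Image

with Dataset(expt_id, **server_info) as conn:
    image_ids = conn.get_images()  # map of position names to OMERO image IDs (assumed)
with Image(list(image_ids.values())[0], **server_info) as image:
    print(image.metadata["channels"])
#+end_src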
* Language
We use *tile* and *trap* interchangeably, but *tile* is preferred.
We use *bud* and *daughter* interchangeably, but *bud* is preferred.
We use *record* and *kymograph* interchangeably, but *record* is preferred.
Binary example images under examples/extraction/pairs_data/: pos010_trap001_tp0001_GFP.png (49.1 KiB), pos010_trap050_tp0001_GFP.png (49.1 KiB), pos010_trap081_tp0001_GFP.png (50.1 KiB).

from extraction.core.extractor import Extractor
from extraction.core.parameters import Parameters

params = Parameters(
    tree={
        "general": {"None": ["area"]},
        "GFPFast": {"np_max": ["mean", "median", "imBackground"]},
        "pHluorin405": {"np_max": ["mean", "median", "imBackground"]},
        "mCherry": {
            "np_max": ["mean", "median", "imBackground", "max5px", "max2p5pc"]
        },
    }
)
ext = Extractor(params, omero_id=19310)
d = ext.extract_tp(tp=1, tile_size=117)
import matplotlib.pyplot as plt

from core.experiment import Experiment
from core.segment import Tiler

expt = Experiment.from_source(
    19310,  # Experiment ID on OMERO
    "upload",  # OMERO Username
    "***REMOVED***",  # OMERO Password
    "islay.bio.ed.ac.uk",  # OMERO host
    port=4064,  # This is default
)

# Load whole position
img = expt[0, 0, :, :, 2]
plt.imshow(img[0, 0, ..., 0])
plt.show()

# Manually get template
tilesize = 117
x0 = 827
y0 = 632
trap_template = img[0, 0, x0 : x0 + tilesize, y0 : y0 + tilesize, 0]
plt.imshow(trap_template)
plt.show()

tiler = Tiler(expt, template=trap_template)

# Load images (takes about 5 mins)
trap_tps = tiler.get_tiles_timepoint(0, tile_size=117, z=[2])

# Plot found traps
nrows, ncols = (5, 5)
fig, axes = plt.subplots(nrows, ncols)
for i in range(nrows):
    for j in range(ncols):
        if i * nrows + j < trap_tps.shape[0]:
            axes[i, j].imshow(trap_tps[i * nrows + j, 0, 0, ..., 0])
plt.show()
=====================
06-Jan-2020 18:30:59 Start creating new experiment using parameters:
Omero experiment name: 001
Temporary working directory: C:
06-Jan-2020 18:30:59 Processing position 2 (1108_002)
06-Jan-2020 18:31:00 Processing position 3 (1108_003)
06-Jan-2020 18:31:01 Processing position 4 (1109_004)
06-Jan-2020 18:31:02 Processing position 5 (1109_005)
06-Jan-2020 18:31:04 Processing position 6 (1109_006)
06-Jan-2020 18:31:05 Processing position 7 (1110_007)
06-Jan-2020 18:31:06 Processing position 8 (1110_008)
06-Jan-2020 18:31:07 Processing position 9 (1110_009)
06-Jan-2020 18:31:10 Successfully completed creating new experiment in 11 secs.
---------------------
=====================
06-Jan-2020 18:31:33 Start selecting traps...
06-Jan-2020 18:31:33 Processing position 1 (1108_001)
06-Jan-2020 18:31:40 Remove trap at 550 1188
06-Jan-2020 18:31:40 Remove trap at 733 1179
06-Jan-2020 18:31:41 Remove trap at 384 1189
06-Jan-2020 18:31:42 Remove trap at 201 1186
06-Jan-2020 18:31:47 Processing position 2 (1108_002)
06-Jan-2020 18:31:52 Remove trap at 384 1060
06-Jan-2020 18:31:54 Remove trap at 1081 571
06-Jan-2020 18:32:01 Processing position 3 (1108_003)
06-Jan-2020 18:32:05 Remove trap at 948 1140
06-Jan-2020 18:32:06 Remove trap at 1141 1174
06-Jan-2020 18:32:17 Remove trap at 139 1111
06-Jan-2020 18:32:18 Add trap at 130 1138
06-Jan-2020 18:32:26 Processing position 4 (1109_004)
06-Jan-2020 18:32:32 Remove trap at 1176 388
06-Jan-2020 18:32:39 Processing position 5 (1109_005)
06-Jan-2020 18:32:44 Remove trap at 1141 1135
06-Jan-2020 18:32:51 Remove trap at 955 379
06-Jan-2020 18:32:55 Processing position 6 (1109_006)
06-Jan-2020 18:33:00 Remove trap at 676 1177
06-Jan-2020 18:33:01 Remove trap at 1111 1147
06-Jan-2020 18:33:14 Processing position 7 (1110_007)
06-Jan-2020 18:33:20 Remove trap at 46 46
06-Jan-2020 18:33:28 Remove trap at 1150 84
06-Jan-2020 18:33:34 Processing position 8 (1110_008)
06-Jan-2020 18:33:49 Processing position 9 (1110_009)
06-Jan-2020 18:33:55 Add trap at 1153 1129
06-Jan-2020 18:33:57 Remove trap at 1135 1141
06-Jan-2020 18:33:57 Remove trap at 1176 1095
06-Jan-2020 18:34:15 Successfully completed selecting traps in 2.7 mins.
---------------------
=====================
06-Jan-2020 18:34:28 Start setting extraction parameters using parameters:
extractionParameters: {
extractFunction: extractCellDataStandardParfor
functionParameters: {
type: max
channels: 2 3
nuclearMarkerChannel: NaN
maxPixOverlap: 5
maxAllowedOverlap: 25
}
}
06-Jan-2020 18:34:28 Processing position 1 (1108_001)
06-Jan-2020 18:34:28 Processing position 2 (1108_002)
06-Jan-2020 18:34:29 Processing position 3 (1108_003)
06-Jan-2020 18:34:29 Processing position 4 (1109_004)
06-Jan-2020 18:34:30 Processing position 5 (1109_005)
06-Jan-2020 18:34:30 Processing position 6 (1109_006)
06-Jan-2020 18:34:30 Processing position 7 (1110_007)
06-Jan-2020 18:34:31 Processing position 8 (1110_008)
06-Jan-2020 18:34:31 Processing position 9 (1110_009)
06-Jan-2020 18:34:33 Successfully completed setting extraction parameters in 5 secs.
---------------------
=====================
07-Jan-2020 13:17:43 Start tracking traps in time...
07-Jan-2020 13:17:43 Processing position 1 (1108_001)
07-Jan-2020 13:23:31 Processing position 2 (1108_002)
07-Jan-2020 13:29:21 Processing position 3 (1108_003)
07-Jan-2020 13:35:13 Processing position 4 (1109_004)
07-Jan-2020 13:41:19 Processing position 5 (1109_005)
07-Jan-2020 13:47:09 Processing position 6 (1109_006)
07-Jan-2020 13:52:57 Processing position 7 (1110_007)
07-Jan-2020 13:58:41 Processing position 8 (1110_008)
07-Jan-2020 14:04:41 Processing position 9 (1110_009)
07-Jan-2020 14:10:38 Successfully completed tracking traps in time in 52.9 mins.
---------------------
=====================
07-Jan-2020 14:10:38 Start baby segmentation...
07-Jan-2020 14:10:39 Processing position 1 (1108_001)
07-Jan-2020 14:14:32 cTimelapse: 210.344 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:18:30 cTimelapse: 240.345 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:22:31 cTimelapse: 272.459 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:26:32 cTimelapse: 303.876 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:30:34 cTimelapse: 336.470 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:32:32 Processing position 2 (1108_002)
07-Jan-2020 14:36:22 cTimelapse: 206.699 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:40:12 cTimelapse: 235.726 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:44:13 cTimelapse: 268.814 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:48:27 cTimelapse: 306.046 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:52:43 cTimelapse: 343.681 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:54:44 Processing position 3 (1108_003)
07-Jan-2020 14:58:47 cTimelapse: 214.895 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:02:44 cTimelapse: 247.137 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:06:47 cTimelapse: 280.902 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:10:51 cTimelapse: 314.796 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:15:13 cTimelapse: 354.774 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:17:16 Processing position 4 (1109_004)
07-Jan-2020 15:21:06 cTimelapse: 222.663 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:25:09 cTimelapse: 253.596 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:29:16 cTimelapse: 286.597 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:33:46 cTimelapse: 325.040 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:38:35 cTimelapse: 369.190 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:40:50 Processing position 5 (1109_005)
07-Jan-2020 15:45:01 cTimelapse: 235.110 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:49:23 cTimelapse: 268.760 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:53:50 cTimelapse: 304.703 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:58:15 cTimelapse: 339.861 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:02:47 cTimelapse: 377.877 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:04:53 Processing position 6 (1109_006)
07-Jan-2020 16:08:32 cTimelapse: 205.246 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:12:09 cTimelapse: 231.500 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:15:49 cTimelapse: 259.276 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:19:45 cTimelapse: 291.813 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:24:03 cTimelapse: 331.193 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:26:11 Processing position 7 (1110_007)
07-Jan-2020 16:29:30 cTimelapse: 222.990 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:32:46 cTimelapse: 238.288 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:36:03 cTimelapse: 255.524 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:39:21 cTimelapse: 275.165 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:42:40 cTimelapse: 297.244 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:44:14 Processing position 8 (1110_008)
07-Jan-2020 16:47:32 cTimelapse: 215.583 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:50:51 cTimelapse: 235.959 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:54:09 cTimelapse: 256.409 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:57:25 cTimelapse: 275.563 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:00:43 cTimelapse: 296.390 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:02:13 Processing position 9 (1110_009)
07-Jan-2020 17:05:35 cTimelapse: 225.847 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:08:54 cTimelapse: 245.291 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:12:17 cTimelapse: 266.060 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:15:41 cTimelapse: 288.448 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:19:04 cTimelapse: 311.290 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:20:36 Successfully completed baby segmentation in 3.2 hours.
---------------------
=====================
07-Jan-2020 17:20:37 Start tracking cells using parameters:
Tracking threshold: 10
07-Jan-2020 17:20:39 Processing position 1 (1108_001)
07-Jan-2020 17:21:23 Processing position 2 (1108_002)
07-Jan-2020 17:22:06 Processing position 3 (1108_003)
07-Jan-2020 17:22:49 Processing position 4 (1109_004)
07-Jan-2020 17:23:33 Processing position 5 (1109_005)
07-Jan-2020 17:24:18 Processing position 6 (1109_006)
07-Jan-2020 17:24:58 Processing position 7 (1110_007)
07-Jan-2020 17:25:25 Processing position 8 (1110_008)
07-Jan-2020 17:25:55 Processing position 9 (1110_009)
07-Jan-2020 17:26:25 Successfully completed tracking cells in 5.8 mins.
---------------------
=====================
07-Jan-2020 17:26:25 Start autoselecting cells using parameters:
Fraction of timelapse that cells are present for: 0.5
Number of frames a cell must be present: 540
Cell must appear by frame: 540
Cell must still be present by frame: 1
Maximum number of cells: Inf
07-Jan-2020 17:26:27 Processing position 1 (1108_001)
07-Jan-2020 17:26:42 Processing position 2 (1108_002)
07-Jan-2020 17:26:58 Processing position 3 (1108_003)
07-Jan-2020 17:27:14 Processing position 4 (1109_004)
07-Jan-2020 17:27:31 Processing position 5 (1109_005)
07-Jan-2020 17:27:48 Processing position 6 (1109_006)
07-Jan-2020 17:28:03 Processing position 7 (1110_007)
07-Jan-2020 17:28:13 Processing position 8 (1110_008)
07-Jan-2020 17:28:25 Processing position 9 (1110_009)
07-Jan-2020 17:28:36 Successfully completed autoselecting cells in 2.2 mins.
---------------------
=====================
07-Jan-2020 17:28:37 Start extracting cell information...
07-Jan-2020 17:28:39 Processing position 1 (1108_001)
07-Jan-2020 17:58:38 Processing position 2 (1108_002)
07-Jan-2020 18:28:43 Processing position 3 (1108_003)
07-Jan-2020 18:58:45 Processing position 4 (1109_004)
07-Jan-2020 19:29:03 Processing position 5 (1109_005)
07-Jan-2020 19:59:31 Processing position 6 (1109_006)
07-Jan-2020 20:29:01 Processing position 7 (1110_007)
07-Jan-2020 20:56:05 Processing position 8 (1110_008)
07-Jan-2020 21:23:53 Processing position 9 (1110_009)
07-Jan-2020 21:51:15 Successfully completed extracting cell information in 4.4 hours.
---------------------
=====================
07-Jan-2020 21:51:16 Start baby lineage extraction...
07-Jan-2020 21:51:18 Processing position 1 (1108_001)
07-Jan-2020 21:52:37 Processing position 2 (1108_002)
07-Jan-2020 21:53:57 Processing position 3 (1108_003)
07-Jan-2020 21:55:16 Processing position 4 (1109_004)
07-Jan-2020 21:56:36 Processing position 5 (1109_005)
07-Jan-2020 21:57:59 Processing position 6 (1109_006)
07-Jan-2020 21:59:08 Processing position 7 (1110_007)
07-Jan-2020 21:59:50 Processing position 8 (1110_008)
07-Jan-2020 22:00:41 Processing position 9 (1110_009)
07-Jan-2020 22:01:26 Successfully completed baby lineage extraction in 10.2 mins.
---------------------
=====================
07-Jan-2020 22:01:26 Start compiling cell information...
07-Jan-2020 22:01:28 Processing position 1 (1108_001)
07-Jan-2020 22:01:30 Processing position 2 (1108_002)
07-Jan-2020 22:01:33 Processing position 3 (1108_003)
07-Jan-2020 22:01:35 Processing position 4 (1109_004)
07-Jan-2020 22:01:38 Processing position 5 (1109_005)
07-Jan-2020 22:01:40 Processing position 6 (1109_006)
07-Jan-2020 22:01:42 Processing position 7 (1110_007)
07-Jan-2020 22:01:44 Processing position 8 (1110_008)
07-Jan-2020 22:01:46 Processing position 9 (1110_009)
07-Jan-2020 22:02:20 Successfully completed compiling cell information in 54 secs.
---------------------
Channels:
Channel name, Exposure time, Skip, Z sect., Start time, Camera mode, EM gain, Voltage
Brightfield, 30, 1, 1, 1, 2, 270, 1.000
GFPFast, 30, 1, 1, 1, 2, 270, 3.500
mCherry, 100, 1, 1, 1, 2, 270, 2.500
Z_sectioning:
Sections,Spacing,PFSon?,AnyZ?,Drift,Method
3, 0.80, 1, 1, 0, 2
Time_settings:
1,120,660,79200
Points:
Position name, X position, Y position, Z position, PFS offset, Group, Brightfield, GFPFast, mCherry
pos001, 568.00, 1302.00, 1876.500, 122.450, 1, 30, 30, 100
pos002, 1267.00, 1302.00, 1880.125, 119.950, 1, 30, 30, 100
pos003, 1026.00, 977.00, 1877.575, 120.100, 1, 30, 30, 100
pos004, 540.00, -347.00, 1868.725, 121.200, 2, 30, 30, 100
pos005, 510.00, -687.00, 1867.150, 122.900, 2, 30, 30, 100
pos006, -187.00, -470.00, 1864.050, 119.600, 2, 30, 30, 100
pos007, -731.00, 916.00, 1867.050, 117.050, 3, 30, 30, 100
pos008, -1003.00, 1178.00, 1866.425, 121.700, 3, 30, 30, 100
pos009, -568.00, 1157.00, 1868.450, 119.350, 3, 30, 30, 100
Flow_control:
Syringe pump details: 2 pumps.
Pump states at beginning of experiment:
Pump port, Diameter, Current rate, Direction, Running, Contents
COM7, 14.43, 0.00, INF, 1, 2% glucose in SC
COM8, 14.43, 4.00, INF, 1, 0.1% glucose in SC
Dynamic flow details:
Number of pump changes:
1
Switching parameters:
Infuse/withdraw volumes:
50
Infuse/withdraw rates:
100
Times:
0
Switched to:
2
Switched from:
1
Flow post switch:
0
4
2022-10-10 15:31:27,350 - INFO
Swain Lab microscope experiment log file
GIT commit: e5d5e33 fix: changes to a few issues with focus control on Batman.
Microscope name: Batman
Date: 2022-10-10 15:31:27
Log file path: D:\AcquisitionDataBatman\Swain Lab\Ivan\RAW DATA\2022\Oct\10-Oct-2022\pH_med_to_low00\pH_med_to_low.log
Micromanager config file: C:\Users\Public\Microscope control\Micromanager config files\Batman_python_15_4_22.cfg
Omero project: Default project
Omero tags:
Experiment details: Effect on growth and cytoplasmic pH of switch from normal pH (4.25) media to higher pH (5.69). Switching is run using the Oxygen software
-----Acquisition settings-----
2022-10-10 15:31:27,350 - INFO Image Configs:
Image config,Channel,Description,Exposure (ms), Number of Z sections,Z spacing (um),Sectioning method
brightfield1,Brightfield,Default bright field config,30,5,0.6,PIFOC
pHluorin405_0_4,pHluorin405,Phluorin excitation from 405 LED 0.4v and 10ms exposure,5,1,0.6,PIFOC
pHluorin488_0_4,GFPFast,Phluorin excitation from 488 LED 0.4v,10,1,0.6,PIFOC
cy5,cy5,Default cy5,30,1,0.6,PIFOC
Device properties:
Image config,device,property,value
pHluorin405_0_4,DTOL-DAC-1,Volts,0.4
pHluorin488_0_4,DTOL-DAC-2,Volts,0.4
cy5,DTOL-DAC-3,Volts,4
2022-10-10 15:31:27,353 - INFO
group: YST_247 field: position
Name, X, Y, Z, Autofocus offset
YST_247_001,-8968,-3319,2731.125040696934,123.25
YST_247_002,-8953,-3091,2731.3000406995416,123.25
YST_247_003,-8954,-2849,2731.600040704012,122.8
YST_247_004,-8941,-2611,2730.7750406917185,122.8
YST_247_005,-8697,-2541,2731.4500407017767,118.6
group: YST_247 field: time
start: 0
interval: 300
frames: 180
group: YST_247 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,356 - INFO
group: YST_1510 field: position
Name,X,Y,Z,Autofocus offset
YST_1510_001,-6450,-230,2343.300034917891,112.55
YST_1510_002,-6450,-436,2343.350034918636,112.55
YST_1510_003,-6450,-639,2344.000034928322,116.8
YST_1510_004,-6450,-831,2344.250034932047,116.8
YST_1510_005,-6848,-536,2343.3250349182636,110
group: YST_1510 field: time
start: 0
interval: 300
frames: 180
group: YST_1510 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,359 - INFO
group: YST_1511 field: position
Name, X, Y, Z, Autofocus offset
YST_1511_001,-10618,-1675,2716.900040484965,118.7
YST_1511_002,-10618,-1914,2717.2250404898077,122.45
YST_1511_003,-10367,-1695,2718.2500405050814,120.95
YST_1511_004,-10367,-1937,2718.8250405136496,120.95
YST_1511_005,-10092,-1757,2719.975040530786,119.45
group: YST_1511 field: time
start: 0
interval: 300
frames: 180
group: YST_1511 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,362 - INFO
group: YST_1512 field: position
Name,X,Y,Z,Autofocus offset
YST_1512_001,-8173,-2510,2339.0750348549336,115.65
YST_1512_002,-8173,-2718,2338.0250348392874,110.8
YST_1512_003,-8173,-2963,2336.625034818426,110.8
YST_1512_004,-8457,-2963,2336.350034814328,110.9
YST_1512_005,-8481,-2706,2337.575034832582,113.3
group: YST_1512 field: time
start: 0
interval: 300
frames: 180
group: YST_1512 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,365 - INFO
group: YST_1513 field: position
Name,X,Y,Z,Autofocus offset
YST_1513_001,-6978,-2596,2339.8750348668545,113.3
YST_1513_002,-6978,-2380,2340.500034876168,113.3
YST_1513_003,-6971,-2163,2340.8750348817557,113.3
YST_1513_004,-6971,-1892,2341.2500348873436,113.3
YST_1513_005,-6692,-1892,2341.550034891814,113.3
group: YST_1513 field: time
start: 0
interval: 300
frames: 180
group: YST_1513 field: config
brightfield1: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin405_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
pHluorin488_0_4: 0xfffffffffffffffffffffffffffffffffffffffffffff
cy5: 0xfffffffffffffffffffffffffffffffffffffffffffff
2022-10-10 15:31:27,365 - INFO
2022-10-10 15:31:27,365 - INFO
-----Experiment started-----
from aliby.pipeline import PipelineParameters, Pipeline

params = PipelineParameters.default(
    general={
        "expt_id": 2172,
        "distributed": 0,
        "directory": ".",
        "host": "staffa.bio.ed.ac.uk",
        "username": "USERNAME",  # placeholder: your OMERO username
        "password": "PASSWORD",  # placeholder: your OMERO password
    }
)
# specify OMERO_channels if the channels on OMERO have a different order from the logfiles
p = Pipeline(params)
p.run()
from pathlib import Path

from aliby.pipeline import Pipeline, PipelineParameters
from postprocessor.grouper import Grouper

omid = "25681_2022_04_30_flavin_htb2_glucose_10mgpL_01_00"
h5dir = "/Users/pswain/wip/aliby_output/"
omero_dir = "/Users/pswain/wip/aliby_input/"

# setup and run pipeline
params = PipelineParameters.default(
    general={
        "expt_id": omero_dir + omid,
        "distributed": 2,
        "directory": h5dir,
        # specify position to segment
        "filter": "fy4_007",
        # specify final time point
        # "tps": 4,
    },
)

# initialise and run pipeline
p = Pipeline(params, OMERO_channels=["Brightfield", "Flavin"])
p.run()