# Running the analysis pipeline
You can run the analysis pipeline either via the command line interface (CLI) or using a script that incorporates the `aliby.pipeline.Pipeline` object.
## CLI
On a CLI, you can use the `aliby-run` command. This command takes options as follows:
- `--host`: Address of image-hosting server.
- `--username`: Username to access image-hosting server.
- `--password`: Password to access image-hosting server.
- `--expt_id`: Number ID of experiment stored on host server.
- `--distributed`: Number of distributed cores to use for segmentation and signal processing. If 0, there is no parallelisation.
- `--tps`: Optional. Number of time points from the beginning of the experiment to use. If not specified, the pipeline processes all time points.
- `--directory`: Optional. Parent directory to save the data files (HDF5) generated, `./data` by default; the files will be stored in a child directory whose name is the name of the experiment.
- `--filter`: Optional. List of positions to use for analysis, or alternatively a regex (regular expression) or list of regexes to match position names. **Note: the CLI currently cannot take a list of strings as input.**
- `--overwrite`: Optional. Whether to overwrite an existing data directory. True by default.
- `--override_meta`: Optional. Whether to overwrite the existing metadata. True by default.
Example usage:
```bash
aliby-run --expt_id EXPT_PATH --distributed 4 --tps None
```
To access data on an OMERO server, the basic arguments are:
```bash
aliby-run --expt_id XXX --host SERVER.ADDRESS --username USER --password PASSWORD
```
## Script
Use the `aliby.pipeline.Pipeline` object and supply a dictionary, following the example below. The parameters have the same meaning as described in the CLI section above.
```python
#!/usr/bin/env python3
from aliby.pipeline import Pipeline, PipelineParameters

# Specify experiment IDs
ids = [101, 102]
for i in ids:
    print(i)
    try:
        # Create a dictionary to define the pipeline parameters.
        params = PipelineParameters.default(
            general={
                "expt_id": i,
                "distributed": 6,
                "host": "INSERT ADDRESS HERE",
                "username": "INSERT USERNAME HERE",
                "password": "INSERT PASSWORD HERE",
                # Ensure existing data will be overwritten
                "override_meta": True,
                "overwrite": True,
            }
        )
        # Fine-grained control beyond general parameters:
        # change a specific leaf in the extraction tree.
        # This example tells the pipeline to additionally compute the
        # nuc_est_conv quantity, which is a measure of the degree of
        # localisation of a signal in a cell.
        params = params.to_dict()
        leaf_to_change = params["extraction"]["tree"]["GFP"]["np_max"]
        leaf_to_change.add("nuc_est_conv")
        # Regenerate PipelineParameters and run the pipeline
        p = Pipeline(PipelineParameters.from_dict(params))
        p.run()
    except Exception as e:
        # Report the error and continue with the next experiment
        print(e)
```
This example code can be the contents of a `run.py` file, and you can run it via
```bash
python run.py
```
in the appropriate virtual environment.
Alternatively, the example code can be the contents of a cell in a Jupyter notebook.
@@ -10,4 +10,7 @@
:recursive:
aliby
agora
extraction
postprocessor
logfile_parser
@@ -48,7 +48,9 @@ html_show_sourcelink = (
False # Remove 'view source code' from top of page (for html, not python)
)
autodoc_inherit_docstrings = True # If no docstring, inherit from base class
set_type_checking_flag = (
    True  # Enable 'expensive' imports for sphinx_autodoc_typehints
)
nbsphinx_allow_errors = True # Continue through Jupyter errors
# autodoc_typehints = "description" # Sphinx-native method. Not as good as sphinx_autodoc_typehints
add_module_names = False # Remove namespaces from class/method signatures
@@ -4,12 +4,15 @@
contain the root `toctree` directive.
.. toctree::
   :hidden:

   Home page <self>
   ALIBY reference <_autosummary/aliby>
   extraction reference <_autosummary/extraction>
   Installation <INSTALL.md>
   Pipeline options <PIPELINE.md>
   Contributing <CONTRIBUTING.md>

..
   Examples <examples.rst>
   Reference <api.rst>
..
.. include:: ../../README.md
   :parser: myst_parser.sphinx_
#+title: Input/Output Stage Dependencies
Overview of the fields each consecutive step requires in order to run, and of the fields it produces.
- Registration
- Tiler
  - Requires:
    - None
  - Produces:
    - /trap_info
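As an illustration of how such dependencies could be checked mechanically, here is a hypothetical sketch; the step and group names merely follow the list above, and none of this is aliby API:

```python
# Hypothetical sketch: encode each step's required and produced HDF5
# groups, then check a step's inputs exist before running it.
STAGE_IO = {
    "Registration": {"requires": set(), "produces": {"/trap_info"}},
    "Tiler": {"requires": set(), "produces": {"/trap_info"}},
}


def can_run(step: str, available: set) -> bool:
    """Return True if every group the step requires is already present."""
    return STAGE_IO[step]["requires"] <= available


available: set = set()
for step in ("Registration", "Tiler"):
    if can_run(step, available):
        # Record the groups this step writes so later steps can depend on them
        available |= STAGE_IO[step]["produces"]
print(sorted(available))  # ['/trap_info']
```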
#+title: Aliby metadata specification
Draft of recommended metadata for images, to provide a standard interface for aliby. I attempt to follow OMERO's metadata structures.
* Essential data
- DimensionOrder: str
Order of dimensions (e.g., TCZYX for Time, Channel, Z, Y, X)
- PixelSize: float
Size of pixel, useful for segmentation.
- Channels: List[str]
Channel names, used to refer to channels in parameters.
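A minimal sketch of the essential fields as a plain Python mapping (the values are invented for illustration):

```python
# Essential metadata fields from the draft spec above; values are made up.
metadata = {
    "DimensionOrder": "TCZYX",  # Time, Channel, Z, Y, X
    "PixelSize": 0.182,  # size of a pixel (invented value)
    "Channels": ["Brightfield", "GFPFast", "mCherry"],
}

# Basic sanity checks a standard interface could apply
assert sorted(metadata["DimensionOrder"]) == sorted("TCZYX")
assert metadata["PixelSize"] > 0
assert all(isinstance(c, str) for c in metadata["Channels"])
```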
* Optional but useful data
- ntps: int
Number of time-points
- Date
Date of experiment
- interval: float
Time interval when the experiment has a constant acquisition time. If it changes with position, or the experiment is dynamic, this is the largest number that divides all the different intervals (their greatest common divisor).
- Channel conditions: DataFrame
Dataframe with acquisition features for each image as a function of a minimal time interval unit.
- Group config: DataFrame
If multiple groups are used, it indicates the time-points at which the corresponding channel was acquired.
- LED: List[str]
LED names. Useful when images are acquired with the same LED and filter but under multiple voltage conditions.
- Filter: List[str]
Filter names. Useful when images are acquired with the same LED and filter but multiple voltage conditions.
- tags: List[str]
Tags associated with the experiment. Useful for semi-automated experiment exploration.
- Experiment-wide groups: List[int]
List of the group to which each position belongs.
- Group names: List[str]
List of group names.
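The interval rule above (the maximum number that can divide all the different conditions) is a greatest common divisor; a sketch with invented per-condition acquisition intervals:

```python
from functools import reduce
from math import gcd

# Invented acquisition intervals (seconds) for three conditions
intervals = [120, 300, 60]

# The experiment-wide interval is the largest number dividing all of them
interval = reduce(gcd, intervals)
print(interval)  # 60
```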
* Optional
- hardware information: Dict[str, str]
Names of all hardware used to acquire images.
- Acquisition software and version: Tuple[str,str]
- Experiment start: date
- Experiment end: date
#+title: ALIBY roadmap
Overview of potential improvements, goals, issues and other thoughts worth keeping in the repository. In general, these are things that the original developer would have liked to implement had there been enough time.
* General goals
- Simplify code base
- Reduce dependency on BABY
- Abstract components beyond cell outlines (e.g., vacuoles or other ROIs)
- Enable providing metadata defaults (remove the hard dependency on metadata)
- (Relevant to BABY): Migrate aliby-baby from Keras to PyTorch. Immediately afterwards, upgrade h5py to the latest version (we are stuck on 2.10.0 due to Keras).
* Long-term tasks (Soft Eng)
- Support external segmentation/tracking/lineage/processing tools
- Split segmentation, tracking and lineage into independent Steps
- Implement the pipeline as an acyclic graph
- Isolate lineage and tracking into a section of aliby or an independent package
- Abstract cells into "ROIs" or "Outlines"
- Abstract lineage into "Outline relationships" (this may help study cell-to-cell interactions in the future)
- Add support for next-generation microscopy formats.
- Make live cell processing great again! (low priority)
* Potential features
- Flat field correction (requires research on what is the best way to do it)
- Support for monotiles (e.g., agarose pads)
- Support the user providing location of tiles (could be a GUI in which the user selects a region)
- Support multiple neural networks (e.g., vacuole/nucleus segmentation in addition to cell segmentation)
- Use CellPose as a backup for accuracy-first pipelines
* Potential CLI(+matplotlib) interfaces
The fastest way to get a GUI-like interface is to use matplotlib as a panel that updates and reads keyboard input to interact with the data. All of this can be done within matplotlib in a few hundred lines of code.
- Annotate intracellular contents
- Interface to adjust the parameters for calibration
- Basic selection of region of interest in a per-position basis
* Sections in need of refactoring
** Extraction
Extraction could easily increase its processing speed. Most of the code was not originally written using casting and vectorised operations.
- Reducing the use of Python loops to the minimum
- Replacing nested functions with functional mappings (extraction would be faster and clearer with a functional programming approach)
- Replacing the tree with a set of tuples and delegating processing order to dask.
Dask can produce its own internal tree and optimise the order, rendering our tree unnecessary
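To illustrate the kind of change meant here (a synthetic sketch, not aliby code): computing per-cell means can replace one Python-level masked pass per cell with a single vectorised `np.bincount` pass over the whole image.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # synthetic fluorescence image
labels = rng.integers(0, 4, size=(64, 64))  # 0 = background, 1-3 = cells

# Per-cell loop: one masked pass over the image per cell
means_loop = [img[labels == c].mean() for c in range(1, 4)]

# Vectorised: two bincount passes, regardless of the number of cells
sums = np.bincount(labels.ravel(), weights=img.ravel(), minlength=4)
counts = np.bincount(labels.ravel(), minlength=4)
means_vec = sums[1:] / counts[1:]

assert np.allclose(means_loop, means_vec)
```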
** Postprocessing
- Clarify the limits of the picking and merging classes: these are temporary procedures; in the future, segmentation should become more accurate, making the Picker redundant, and better tracking/lineage assignment will make merging redundant.
- Formalise how lineage and reshaper processes are handled
- Non-destructive postprocessing.
The way postprocessing is done is destructive at the moment. If we aim to perform more complex data analysis automatically, an implementation of complementary and tractable sub-pipelines is essential. (low priority, perhaps within scripts)
- Functionalise the parameter-process schema. This schema provides a decent structure, but it requires a lot of boilerplate code. The best option for the transition is probably a function that converts a Process class into a function, and another that extracts default values from a Parameters class. This could in theory replace most Process-Parameters pairs. Lineage functions will pose a problem, and a common interface to obtain lineage or outline-to-outline relationships will need to be engineered.
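A sketch of that transition, with invented stand-ins for a Parameters/Process pair (these are not aliby's actual classes):

```python
from dataclasses import dataclass, fields


@dataclass
class MyParameters:
    """Stand-in Parameters class; defaults live on the fields."""

    threshold: float = 0.5


class MyProcess:
    """Stand-in Process: configured by Parameters, applied via run()."""

    def __init__(self, parameters: MyParameters):
        self.parameters = parameters

    def run(self, data):
        return [x for x in data if x > self.parameters.threshold]


def defaults_of(params_cls) -> dict:
    """Extract the default values from a Parameters class."""
    return {f.name: f.default for f in fields(params_cls)}


def as_function(process_cls, params_cls):
    """Convert a Process class into a plain function with keyword overrides."""

    def fn(data, **overrides):
        params = params_cls(**{**defaults_of(params_cls), **overrides})
        return process_cls(params).run(data)

    return fn


my_process = as_function(MyProcess, MyParameters)
print(my_process([0.2, 0.7, 0.9], threshold=0.6))  # [0.7, 0.9]
```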
** Compiler/Reporter
- Remove the compiler step and focus on designing an adequate report, then build it straight after postprocessing ends.
** Writers/Readers
- Consider storing signals that are similar (e.g., signals arising from each channel) in a single multidimensional array to save storage space. (mid priority)
- Refactor (Extraction/Postprocessing) Writer to use the DynamicWriter Abstract Base Class.
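A numpy-only sketch of the storage idea (the real writers work with HDF5): per-channel signals of the same shape can be stacked into one multidimensional array rather than kept as one dataset per channel.

```python
import numpy as np

# Synthetic per-channel signals: cells x time points
gfp = np.ones((10, 5))
mcherry = np.zeros((10, 5))

# One channel x cell x time array instead of one dataset per channel
stacked = np.stack([gfp, mcherry], axis=0)
assert stacked.shape == (2, 10, 5)
assert np.array_equal(stacked[0], gfp)
```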
** Pipeline
Pipeline is in dire need of refactoring, as it coordinates too many things. The best approach would be to modify the structure to delegate more responsibilities to Steps (such as validation) and Writers (such as writing metadata).
* Testing
- I/O interfaces
- Visualisation helpers and other functions
- Running one pipeline from another
- Groupers
* Documentation
- Tutorials and how-to for the usual tasks
- How to deal with different types of data
- How to aggregate data from multiple experiments
- Contribution guidelines (after developing some)
* Tools/alternatives that may be worth considering for the future
- trio/asyncio/anyio for concurrent processing of individual threads
- Pandas -> Polars: Reconsider after pandas 2.0; they will become interoperable
- awkward arrays: Better way to represent data series with different sizes
- h5py -> zarr: the OME-ZARR format is out now, and the field may well move in that direction. This would also make being stuck on h5py 2.10.0 less of a problem.
- Use CellACDC's work on producing a common interface to access a multitude of segmentation algorithms.
* Secrets in the code
- As aliby is adapted to future Python versions, keep up with the "FUTURE" statements that describe how the code can be improved in newer Python versions
- Track FIXMEs and, if we cannot solve them immediately, open an associated issue
* Minor inconveniences to fix
- Update the CellTracker models by retraining with the current scikit-learn (currently it warns that the models were trained with an older version of sklearn)
import numpy as np

from extraction.core.extractor import Extractor
from extraction.core.parameters import Parameters

params = Parameters(
    tree={
@@ -15,6 +14,4 @@ params = Parameters(
ext = Extractor(params, omero_id=19310)
# ext.extract_exp(tile_size=117)
d = ext.extract_tp(tp=1, tile_size=117)
import matplotlib.pyplot as plt
from core.experiment import Experiment
from core.segment import Tiler
expt = Experiment.from_source(
    19310,  # Experiment ID on OMERO
    "upload",  # OMERO username
    "***REMOVED***",  # OMERO password
    "islay.bio.ed.ac.uk",  # OMERO host
    port=4064,  # This is the default
)
# Load whole position
img = expt[0, 0, :, :, 2]
plt.imshow(img[0, 0, ..., 0])
plt.show()
# Manually get template
tilesize = 117
x0 = 827
y0 = 632
trap_template = img[0, 0, x0 : x0 + tilesize, y0 : y0 + tilesize, 0]
plt.imshow(trap_template)
plt.show()
tiler = Tiler(expt, template=trap_template)
# Load images (takes about 5 mins)
trap_tps = tiler.get_tiles_timepoint(0, tile_size=117, z=[2])
# Plot found traps
nrows, ncols = (5, 5)
fig, axes = plt.subplots(nrows, ncols)
for i in range(nrows):
    for j in range(ncols):
        if i * ncols + j < trap_tps.shape[0]:
            axes[i, j].imshow(trap_tps[i * ncols + j, 0, 0, ..., 0])
plt.show()
=====================
06-Jan-2020 18:30:59 Start creating new experiment using parameters:
Omero experiment name: 001
Temporary working directory: C:
06-Jan-2020 18:30:59 Processing position 2 (1108_002)
06-Jan-2020 18:31:00 Processing position 3 (1108_003)
06-Jan-2020 18:31:01 Processing position 4 (1109_004)
06-Jan-2020 18:31:02 Processing position 5 (1109_005)
06-Jan-2020 18:31:04 Processing position 6 (1109_006)
06-Jan-2020 18:31:05 Processing position 7 (1110_007)
06-Jan-2020 18:31:06 Processing position 8 (1110_008)
06-Jan-2020 18:31:07 Processing position 9 (1110_009)
06-Jan-2020 18:31:10 Successfully completed creating new experiment in 11 secs.
---------------------
=====================
06-Jan-2020 18:31:33 Start selecting traps...
06-Jan-2020 18:31:33 Processing position 1 (1108_001)
06-Jan-2020 18:31:40 Remove trap at 550 1188
06-Jan-2020 18:31:40 Remove trap at 733 1179
06-Jan-2020 18:31:41 Remove trap at 384 1189
06-Jan-2020 18:31:42 Remove trap at 201 1186
06-Jan-2020 18:31:47 Processing position 2 (1108_002)
06-Jan-2020 18:31:52 Remove trap at 384 1060
06-Jan-2020 18:31:54 Remove trap at 1081 571
06-Jan-2020 18:32:01 Processing position 3 (1108_003)
06-Jan-2020 18:32:05 Remove trap at 948 1140
06-Jan-2020 18:32:06 Remove trap at 1141 1174
06-Jan-2020 18:32:17 Remove trap at 139 1111
06-Jan-2020 18:32:18 Add trap at 130 1138
06-Jan-2020 18:32:26 Processing position 4 (1109_004)
06-Jan-2020 18:32:32 Remove trap at 1176 388
06-Jan-2020 18:32:39 Processing position 5 (1109_005)
06-Jan-2020 18:32:44 Remove trap at 1141 1135
06-Jan-2020 18:32:51 Remove trap at 955 379
06-Jan-2020 18:32:55 Processing position 6 (1109_006)
06-Jan-2020 18:33:00 Remove trap at 676 1177
06-Jan-2020 18:33:01 Remove trap at 1111 1147
06-Jan-2020 18:33:14 Processing position 7 (1110_007)
06-Jan-2020 18:33:20 Remove trap at 46 46
06-Jan-2020 18:33:28 Remove trap at 1150 84
06-Jan-2020 18:33:34 Processing position 8 (1110_008)
06-Jan-2020 18:33:49 Processing position 9 (1110_009)
06-Jan-2020 18:33:55 Add trap at 1153 1129
06-Jan-2020 18:33:57 Remove trap at 1135 1141
06-Jan-2020 18:33:57 Remove trap at 1176 1095
06-Jan-2020 18:34:15 Successfully completed selecting traps in 2.7 mins.
---------------------
=====================
06-Jan-2020 18:34:28 Start setting extraction parameters using parameters:
extractionParameters: {
extractFunction: extractCellDataStandardParfor
functionParameters: {
type: max
channels: 2 3
nuclearMarkerChannel: NaN
maxPixOverlap: 5
maxAllowedOverlap: 25
}
}
06-Jan-2020 18:34:28 Processing position 1 (1108_001)
06-Jan-2020 18:34:28 Processing position 2 (1108_002)
06-Jan-2020 18:34:29 Processing position 3 (1108_003)
06-Jan-2020 18:34:29 Processing position 4 (1109_004)
06-Jan-2020 18:34:30 Processing position 5 (1109_005)
06-Jan-2020 18:34:30 Processing position 6 (1109_006)
06-Jan-2020 18:34:30 Processing position 7 (1110_007)
06-Jan-2020 18:34:31 Processing position 8 (1110_008)
06-Jan-2020 18:34:31 Processing position 9 (1110_009)
06-Jan-2020 18:34:33 Successfully completed setting extraction parameters in 5 secs.
---------------------
=====================
07-Jan-2020 13:17:43 Start tracking traps in time...
07-Jan-2020 13:17:43 Processing position 1 (1108_001)
07-Jan-2020 13:23:31 Processing position 2 (1108_002)
07-Jan-2020 13:29:21 Processing position 3 (1108_003)
07-Jan-2020 13:35:13 Processing position 4 (1109_004)
07-Jan-2020 13:41:19 Processing position 5 (1109_005)
07-Jan-2020 13:47:09 Processing position 6 (1109_006)
07-Jan-2020 13:52:57 Processing position 7 (1110_007)
07-Jan-2020 13:58:41 Processing position 8 (1110_008)
07-Jan-2020 14:04:41 Processing position 9 (1110_009)
07-Jan-2020 14:10:38 Successfully completed tracking traps in time in 52.9 mins.
---------------------
=====================
07-Jan-2020 14:10:38 Start baby segmentation...
07-Jan-2020 14:10:39 Processing position 1 (1108_001)
07-Jan-2020 14:14:32 cTimelapse: 210.344 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:18:30 cTimelapse: 240.345 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:22:31 cTimelapse: 272.459 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:26:32 cTimelapse: 303.876 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:30:34 cTimelapse: 336.470 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:32:32 Processing position 2 (1108_002)
07-Jan-2020 14:36:22 cTimelapse: 206.699 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:40:12 cTimelapse: 235.726 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:44:13 cTimelapse: 268.814 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:48:27 cTimelapse: 306.046 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:52:43 cTimelapse: 343.681 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 14:54:44 Processing position 3 (1108_003)
07-Jan-2020 14:58:47 cTimelapse: 214.895 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:02:44 cTimelapse: 247.137 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:06:47 cTimelapse: 280.902 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:10:51 cTimelapse: 314.796 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:15:13 cTimelapse: 354.774 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:17:16 Processing position 4 (1109_004)
07-Jan-2020 15:21:06 cTimelapse: 222.663 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:25:09 cTimelapse: 253.596 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:29:16 cTimelapse: 286.597 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:33:46 cTimelapse: 325.040 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:38:35 cTimelapse: 369.190 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:40:50 Processing position 5 (1109_005)
07-Jan-2020 15:45:01 cTimelapse: 235.110 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:49:23 cTimelapse: 268.760 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:53:50 cTimelapse: 304.703 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 15:58:15 cTimelapse: 339.861 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:02:47 cTimelapse: 377.877 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:04:53 Processing position 6 (1109_006)
07-Jan-2020 16:08:32 cTimelapse: 205.246 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:12:09 cTimelapse: 231.500 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:15:49 cTimelapse: 259.276 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:19:45 cTimelapse: 291.813 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:24:03 cTimelapse: 331.193 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:26:11 Processing position 7 (1110_007)
07-Jan-2020 16:29:30 cTimelapse: 222.990 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:32:46 cTimelapse: 238.288 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:36:03 cTimelapse: 255.524 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:39:21 cTimelapse: 275.165 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:42:40 cTimelapse: 297.244 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:44:14 Processing position 8 (1110_008)
07-Jan-2020 16:47:32 cTimelapse: 215.583 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:50:51 cTimelapse: 235.959 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:54:09 cTimelapse: 256.409 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 16:57:25 cTimelapse: 275.563 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:00:43 cTimelapse: 296.390 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:02:13 Processing position 9 (1110_009)
07-Jan-2020 17:05:35 cTimelapse: 225.847 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:08:54 cTimelapse: 245.291 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:12:17 cTimelapse: 266.060 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:15:41 cTimelapse: 288.448 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:19:04 cTimelapse: 311.290 MB; posOverviewGUI: 741.580 MB
07-Jan-2020 17:20:36 Successfully completed baby segmentation in 3.2 hours.
---------------------
=====================
07-Jan-2020 17:20:37 Start tracking cells using parameters:
Tracking threshold: 10
07-Jan-2020 17:20:39 Processing position 1 (1108_001)
07-Jan-2020 17:21:23 Processing position 2 (1108_002)
07-Jan-2020 17:22:06 Processing position 3 (1108_003)
07-Jan-2020 17:22:49 Processing position 4 (1109_004)
07-Jan-2020 17:23:33 Processing position 5 (1109_005)
07-Jan-2020 17:24:18 Processing position 6 (1109_006)
07-Jan-2020 17:24:58 Processing position 7 (1110_007)
07-Jan-2020 17:25:25 Processing position 8 (1110_008)
07-Jan-2020 17:25:55 Processing position 9 (1110_009)
07-Jan-2020 17:26:25 Successfully completed tracking cells in 5.8 mins.
---------------------
=====================
07-Jan-2020 17:26:25 Start autoselecting cells using parameters:
Fraction of timelapse that cells are present for: 0.5
Number of frames a cell must be present: 540
Cell must appear by frame: 540
Cell must still be present by frame: 1
Maximum number of cells: Inf
07-Jan-2020 17:26:27 Processing position 1 (1108_001)
07-Jan-2020 17:26:42 Processing position 2 (1108_002)
07-Jan-2020 17:26:58 Processing position 3 (1108_003)
07-Jan-2020 17:27:14 Processing position 4 (1109_004)
07-Jan-2020 17:27:31 Processing position 5 (1109_005)
07-Jan-2020 17:27:48 Processing position 6 (1109_006)
07-Jan-2020 17:28:03 Processing position 7 (1110_007)
07-Jan-2020 17:28:13 Processing position 8 (1110_008)
07-Jan-2020 17:28:25 Processing position 9 (1110_009)
07-Jan-2020 17:28:36 Successfully completed autoselecting cells in 2.2 mins.
---------------------
=====================
07-Jan-2020 17:28:37 Start extracting cell information...
07-Jan-2020 17:28:39 Processing position 1 (1108_001)
07-Jan-2020 17:58:38 Processing position 2 (1108_002)
07-Jan-2020 18:28:43 Processing position 3 (1108_003)
07-Jan-2020 18:58:45 Processing position 4 (1109_004)
07-Jan-2020 19:29:03 Processing position 5 (1109_005)
07-Jan-2020 19:59:31 Processing position 6 (1109_006)
07-Jan-2020 20:29:01 Processing position 7 (1110_007)
07-Jan-2020 20:56:05 Processing position 8 (1110_008)
07-Jan-2020 21:23:53 Processing position 9 (1110_009)
07-Jan-2020 21:51:15 Successfully completed extracting cell information in 4.4 hours.
---------------------
=====================
07-Jan-2020 21:51:16 Start baby lineage extraction...
07-Jan-2020 21:51:18 Processing position 1 (1108_001)
07-Jan-2020 21:52:37 Processing position 2 (1108_002)
07-Jan-2020 21:53:57 Processing position 3 (1108_003)
07-Jan-2020 21:55:16 Processing position 4 (1109_004)
07-Jan-2020 21:56:36 Processing position 5 (1109_005)
07-Jan-2020 21:57:59 Processing position 6 (1109_006)
07-Jan-2020 21:59:08 Processing position 7 (1110_007)
07-Jan-2020 21:59:50 Processing position 8 (1110_008)
07-Jan-2020 22:00:41 Processing position 9 (1110_009)
07-Jan-2020 22:01:26 Successfully completed baby lineage extraction in 10.2 mins.
---------------------
=====================
07-Jan-2020 22:01:26 Start compiling cell information...
07-Jan-2020 22:01:28 Processing position 1 (1108_001)
07-Jan-2020 22:01:30 Processing position 2 (1108_002)
07-Jan-2020 22:01:33 Processing position 3 (1108_003)
07-Jan-2020 22:01:35 Processing position 4 (1109_004)
07-Jan-2020 22:01:38 Processing position 5 (1109_005)
07-Jan-2020 22:01:40 Processing position 6 (1109_006)
07-Jan-2020 22:01:42 Processing position 7 (1110_007)
07-Jan-2020 22:01:44 Processing position 8 (1110_008)
07-Jan-2020 22:01:46 Processing position 9 (1110_009)
07-Jan-2020 22:02:20 Successfully completed compiling cell information in 54 secs.
---------------------
Channels:
Channel name, Exposure time, Skip, Z sect., Start time, Camera mode, EM gain, Voltage
Brightfield, 30, 1, 1, 1, 2, 270, 1.000
GFPFast, 30, 1, 1, 1, 2, 270, 3.500
mCherry, 100, 1, 1, 1, 2, 270, 2.500
Z_sectioning:
Sections,Spacing,PFSon?,AnyZ?,Drift,Method
3, 0.80, 1, 1, 0, 2
Time_settings:
1,120,660,79200
Points:
Position name, X position, Y position, Z position, PFS offset, Group, Brightfield, GFPFast, mCherry
pos001, 568.00, 1302.00, 1876.500, 122.450, 1, 30, 30, 100
pos002, 1267.00, 1302.00, 1880.125, 119.950, 1, 30, 30, 100
pos003, 1026.00, 977.00, 1877.575, 120.100, 1, 30, 30, 100
pos004, 540.00, -347.00, 1868.725, 121.200, 2, 30, 30, 100
pos005, 510.00, -687.00, 1867.150, 122.900, 2, 30, 30, 100
pos006, -187.00, -470.00, 1864.050, 119.600, 2, 30, 30, 100
pos007, -731.00, 916.00, 1867.050, 117.050, 3, 30, 30, 100
pos008, -1003.00, 1178.00, 1866.425, 121.700, 3, 30, 30, 100
pos009, -568.00, 1157.00, 1868.450, 119.350, 3, 30, 30, 100
Flow_control:
Syringe pump details: 2 pumps.
Pump states at beginning of experiment:
Pump port, Diameter, Current rate, Direction, Running, Contents
COM7, 14.43, 0.00, INF, 1, 2% glucose in SC
COM8, 14.43, 4.00, INF, 1, 0.1% glucose in SC
Dynamic flow details:
Number of pump changes:
1
Switching parameters:
Infuse/withdraw volumes:
50
Infuse/withdraw rates:
100
Times:
0
Switched to:
2
Switched from:
1
Flow post switch:
0
4