aliby issues — https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues

---

# Issue #60: DOCS: Overview/logic of components of aliby & BABY
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/60
2023-01-13 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
Documentation to show overview/logic of components of aliby & BABY.
## Current behaviour/setbacks
Existing documentation does not explain the individual components of aliby & BABY, and thus the typical user does not understand how the parts fit together. We have already seen such issues with developers.
## Desired behaviour/advantages
It will give context to the parts, especially important now that they are becoming more modularised (see %"Segmentation (Baby refactoring)" )
## Implementation sketch
- [ ] Add or clarify documentation/docstrings in key modules (`image`, `tiler/traps`, `extractor`, etc.) so that they are reflected in the Sphinx-generated pages.
- [ ] @dadjavon to add general notes that give an overview of the aliby & BABY architecture.

Label: User-facing documentation

---

# Issue #59: DOCS: How to view & analyse data
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/59
2023-01-18 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
Documentation on how to view & analyse data.
## Current behaviour/setbacks
This does not currently exist in the documentation. This is bad because after running the pipeline, it will be the second thing a typical end user would want to do.
## Desired behaviour/advantages
Have examples of how to use various classes (especially in `postprocessor`) to view & analyse data.
## Implementation sketch
- [x] Add more documentation in markdown cells in the existing data wrangling/plotting notebooks in `skeletons`. @amuoz and @s1947236 to work together on this, as they have diverging notebooks for their own uses, each with their own explanations.

Label: User-facing documentation

---

# Issue #58: DOCS: Tutorial on how to run a pipeline
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/58
2023-01-13 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
Tutorial on how to run a pipeline to segment an experiment.
## Current behaviour/setbacks
This tutorial does not currently exist in the main documentation; therefore, users don't have an easy entry point into using aliby.
## Desired behaviour/advantages
Have instructions on how to run a segmentation pipeline.
The new instructions should replace the current 'quickstart documentation', which addresses setting up an Omero server, data access, tiling, etc. -- these tasks are more advanced and could be moved elsewhere in the documentation.
## Implementation sketch
- [ ] Write documentation in `docs/` and include code based on the `run.py` code.
- [ ] Add docstrings explaining the general parameters in the `PipelineParameters` object, so that they are included in the Sphinx documentation. Then add a link on the run-pipeline page to the page that explains the general parameters.

Label: User-facing documentation

---

# Issue #57: DOCS: Installation instructions
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/57
2023-01-17 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
Add documentation on how to install aliby on various operating systems.
## Current behaviour/setbacks
Existing documentation works for Linux, but we've since uncovered issues with installing on macOS (#48, #56) and Windows (#54).
## Desired behaviour/advantages
Documentation will cover installation on other operating systems, thus reducing the barrier to access.
## Implementation sketch
- Create sections within https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/blob/dev/docs/source/INSTALL.md for each of Linux, macOS, and Windows operating systems.
- These instructions should be spelled out clearly enough that a competent user can easily install aliby.
- If appropriate, include links to e.g. `poetry`, `conda`/`mamba` documentation to reduce the amount of text.
- Potentially also include FAQs, e.g. common installation issues and how to fix them.

Label: User-facing documentation · Assignee: v1iclar2

---

# Issue #56: Illegal instruction error on Mac (M1)
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/56
2023-01-16 · dadjavon

Running the following test on the M1 Mac (Desktop) leads to an `Illegal instruction 4` error.
```python
#!/usr/bin/env python3
from aliby.pipeline import PipelineParameters, Pipeline
params = PipelineParameters.default(general={
    "expt_id": 560,
    "distributed": 0,
    "server_info": {"host": "staffa.bio.ed.ac.uk",
                    "username": **REMOVED**,
                    "password": **REMOVED**},
})
p = Pipeline(params)
p.run()
```
@fwaharte1 is going to test it on an Intel MacBook to check whether this is M1-specific.

Label: Segmentation (Baby refactoring)

---

# Issue #54: Installing aliby on Windows
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/54
2023-01-24 · v1iclar2

I'm on origin/dev and getting this error while trying to set up a test segmentation:
```
from aliby.pipeline import PipelineParameters, Pipeline
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Public\Repos\aliby\src\aliby\pipeline.py", line 21, in <module>
from agora.abc import ParametersABC, ProcessABC
File "C:\Users\Public\Repos\aliby\src\agora\abc.py", line 8, in <module>
from flatten_dict import flatten
ModuleNotFoundError: No module named 'flatten_dict'
```

---

# Issue #51: Add a dummy run to the Pipeline
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/51
2023-01-23 · dadjavon

## Summary
Add an initial time-step "dummy" run of the pipeline with a known output to check for parametrisation bugs.
## Current behaviour/setbacks
If a run fails, there is no straightforward way to know whether it is a parametrisation problem or a deeper bug. This makes debugging difficult, especially with incomplete knowledge of the pipeline.
## Desired behaviour/advantages
Before fully running a pipeline, we would run a few steps with spoofed "dummy" data, with a predictable output. If the pipeline fails during the dummy run, it should be due to a missing or incorrect parameter. The obtained output can be compared to the expected output for easier debugging.
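A rough sketch of how such a check could work (everything here -- `dummy_run`, `pipeline_steps`, `expected` -- is a hypothetical name, not the existing pipeline API):

```python
import os
import tempfile

import numpy as np


def dummy_run(pipeline_steps, dummy_tiles, expected):
    """Run a few pipeline steps on spoofed data and compare to a known output.

    `pipeline_steps` is a list of callables, `dummy_tiles` the spoofed input,
    and `expected` the output the dummy data is known to produce.
    """
    result = dummy_tiles
    for step in pipeline_steps:
        # a failure at this point suggests a missing/incorrect parameter
        result = step(result)
    tmp = tempfile.NamedTemporaryFile(suffix=".npy", delete=False)
    tmp.close()
    np.save(tmp.name, result)
    if np.allclose(result, expected):
        os.remove(tmp.name)  # correct: discard the file and run the real thing
        return True
    return tmp.name  # incorrect: keep the file for manual inspection
```

The real `calibrate` method would presumably wrap something like this around the first few time points.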
## Implementation sketch
- Create a sample dataset that can be used for calibration; this could be the same as the unit test dataset
- Create a way to modify the dataset to match the requested parameters (e.g. fake channels, resize images)
- Create a `calibrate` method in the pipeline that saves results to a temporary file
- if the results are correct: delete the temporary file and run the rest
- if the results are incorrect: save the temporary file (for manual inspection) and error out

Label: Pipeline refactoring

---

# Issue #50: Create abstract runner structure
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/50
2023-01-11 · dadjavon

## Summary
This is a subtask of #49
## Current behaviour/setbacks
The current version of the `BabyRunner` allows us to pick, choose, and format the BABY output as required for the most part, but it is fixed.
## Desired behaviour/advantages
In order to add more model runners (including, potentially, the tracker and the bud assignment) we need:
- [ ] a general description of a `Runner` object (API)
- [ ] to update the `BabyRunner` to do segmentation only
- [ ] to create the `TrackRunner` and `LineageRunner` to match output currently provided by BABY
- [ ] to update/provide the corresponding `Writer` objects
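As a starting point, the shared API implied by the list above might be as small as this (a sketch; `Runner` and `SegmentationRunner` are illustrative names, not the existing classes):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class Runner(ABC):
    """Common interface for model runners (segmentation, tracking, lineage)."""

    @abstractmethod
    def run(self, data: Any) -> Dict[str, Any]:
        """Run the model on `data` and return a dict of named outputs."""


class SegmentationRunner(Runner):
    """Stand-in for a BabyRunner restricted to segmentation only."""

    def run(self, data):
        # placeholder: echo the input back as the named mask
        return {"cell_mask": data}
```

A `TrackRunner` and `LineageRunner` would then subclass `Runner` in the same way, each with its own `Writer`.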
## Implementation sketch
An initial sketch is [here](https://git.ecdf.ed.ac.uk/swain-lab/aliby/skeletons/-/blob/7c9cd6f658a84c390f79fa5d84f23c4bb819cdde/scripts/dev/dev_segmentation_api.py).

Label: Segmentation (Baby refactoring) · Assignee: dadjavon

---

# Issue #49: Abstract segmentation into a general structure
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/49
2023-01-13 · swainlab

## Summary
To allow us to use segmentation algorithms other than BABY, we need a standard interface for inputting images to the segmentation algorithm, and a fixed expected output that can be used by modules downstream.
## Current behaviour/setbacks
Currently, the ALIBY Pipeline considers BABY as an atomic unit that does segmentation, tracking, and lineage assignment. This means that none of these parts can be independently modified or removed.
## Desired behaviour/advantages
The full data flow (up until lineage assignment) is described below. Nothing in extraction or post-processing needs to be changed.
```mermaid
graph LR;
Data-->Registration;
Registration-->Tiling;
Tiling-->Segmentation;
Segmentation-->Tracking;
Segmentation-->Lineage;
Tracking-->Lineage;
```
The main changes in this situation would be:
- `Segmentation` can be multiple modules. Each takes an array of tiles and produces a dictionary of named masks
- BABY produces a cell outline and a bud neck mask
- e.g. MABY produces a nucleus mask and a vacuole mask
- The outputs of the `Segmentation` modules are combined into a single dictionary of named masks
- `Tracking` takes a dictionary of named masks as input
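The contract described above could be sketched like this (`Segmenter`, `ThresholdSegmenter`, and `track` are illustrative stand-ins; BABY itself is not shown):

```python
from abc import ABC, abstractmethod
from typing import Dict

import numpy as np


class Segmenter(ABC):
    """Contract for segmentation modules: an array of tiles in, named masks out."""

    @abstractmethod
    def segment(self, tiles: np.ndarray) -> Dict[str, np.ndarray]:
        """Take an array of tiles and return a dict of named boolean masks."""


class ThresholdSegmenter(Segmenter):
    """Toy stand-in (not BABY): threshold each tile to get a cell mask."""

    def __init__(self, thresh: float = 0.5):
        self.thresh = thresh

    def segment(self, tiles):
        return {"cell_mask": tiles > self.thresh}


def track(masks: Dict[str, np.ndarray]) -> int:
    # a downstream consumer only relies on the agreed mask names
    return int(masks["cell_mask"].sum())
```

Any module honouring `segment()` and the agreed mask names can then be dropped in without touching tracking or lineage.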
## Implementation sketch
- Define an abstract class that defines the expectations that we have of `Segmentation` modules. See [here](https://git.ecdf.ed.ac.uk/swain-lab/aliby/skeletons/-/blob/alan/scripts/dev/dev_abstract_segmentator.py) for an example.
- Define a standard for mask names (e.g. `cell_mask`, `bud_neck_mask`) that can be used by a downstream `Tracker` or `Lineage` object
- Update [the pipeline](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/blob/dev/src/aliby/pipeline.py) to separate the segmentation configuration from the tracking configuration and the lineage assignment configuration
- Instantiate a `Segmentation` module for BABY
- (Optional) Instantiate a `Segmentation` module for a different algorithm

Label: Segmentation (Baby refactoring) · Assignee: dadjavon

---

# Issue #41: ValueError raised when staffa:589 segmented
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/41
2023-01-04 · Alán Muñoz

Compendium of multiple bugs that are simple to fix.
Experiment 589 (pH downshift, 180 time points) needs further investigation.
```
<traceback object at 0x7f6cc47ef880>
Caught exception in worker thread (x = YST_1510_001):
Traceback (most recent call last):
File "/home/alan/Documents/dev/libs/aliby/src/aliby/pipeline.py", line 511, in create_pipeline
PostProcessor(filename, post_proc_params).run()
File "/home/alan/Documents/dev/libs/aliby/src/postprocessor/core/processor.py", line 341, in run
result = loaded_process.run(signal)
File "/home/alan/Documents/dev/libs/aliby/src/postprocessor/core/reshapers/bud_metric.py", line 39, in run
return self.get_bud_metric(signal, mother_bud_ids)
File "/home/alan/Documents/dev/libs/aliby/src/postprocessor/core/reshapers/bud_metric.py", line 70, in get_bud_metric
buds_metric = np.choose(tp_fvt, sorted_da_ids.values)
File "<__array_function__ internals>", line 180, in choose
File "/home/alan/.pyenv/versions/aliby/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 429, in choose
return _wrapfunc(a, 'choose', choices, out=out, mode=mode)
File "/home/alan/.pyenv/versions/aliby/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 57, in _wrapfunc
return bound(*args, **kwds)
ValueError: Need at least 0 and at most 32 array objects.
```
How to replicate? Run this script
```python
from aliby.pipeline import PipelineParameters, Pipeline

ids = [
    589,
]
filters = {
    589: "YST_1510_001",
}
failures = []
for i in ids:
    print(i)
    try:
        params = PipelineParameters.default(
            general={
                "expt_id": i,
                # "tps": 10,
                # "directory": "checks",
                "distributed": 5,
                "filter": filters.get(i, ""),
                "earlystop": dict(
                    min_tp=2000,
                    thresh_pos_clogged=0.3,
                    thresh_trap_clogged=7,
                    ntps_to_eval=5,
                ),
                "override_meta": True,
                "overwrite": True,
                "server_info": {
                    "host": "XXXX",
                    "username": "upload",
                    "password": "pass",
                },
            },
            tiler={"tile_size": 117},
        )
        p = Pipeline(params)
        p.run()
    except Exception as e:
        # record the failure and move on to the next experiment
        print(e)
        failures.append(i)

with open("failures.txt", "w") as f:
    for fail in failures:
        f.write(f"{fail}\n")
```

---

# Issue #23: Tests in need of reimplementation
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/23
2023-01-04 · Alán Muñoz

I am about to delete a bunch of tests that used the now-extinct Experiment class, which means there will be no registry of them. To keep tabs on which ones we need to reimplement, I'll add them as bullet points.
Unit tests
- [ ] Test an hdf5 file and its associated images.
- metadata (channels, time_settings, pixel_size, sectioning)
- [ ] Metadata writing test
- [ ] Tiling-only test
- [ ] Extraction-only test
- [ ] Remote test using public server
Integration tests
- [ ] Metadata + tiling
- [ ] Metadata + tiling + segmentation
- [ ] Metadata + tiling + segmentation + extraction
- [ ] Metadata + tiling + segmentation + extraction + postprocessing

---

# Issue #19: Switching nuc_est_conv and max projection order
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/19
2022-09-27 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
We want to investigate whether swapping operations in computing `nuc_est_conv` across z-stacks improves identification of protein localisation.
## Current behaviour/setbacks
`nuc_est_conv` isn't computed by default during extraction.
## Desired behaviour/advantages
1. Compute `nuc_est_conv` as an additional measure for an experiment of interest. Then go through the usual extraction and post-processing routines.
2. Investigate whether (a) finding the max projection across z-stacks then computing `nucEstConv`, or (b) computing `nucEstConv` for each z-slice then finding the max projection across the time series, does better in terms of identifying protein localisation changes.
Also see https://www.wiki.ed.ac.uk/display/SWAIN/z-stacks+and+nucEstConv -- which suggests that swapping the order may improve things. However, this was based on the MATLAB version of the image segmentation & analysis pipeline.
## Implementation sketch
I will split this into two parts based on the two parts in 'Desired behaviour/advantages'.
**Part 1: computing `nuc_est_conv`**
I have identified 3 options. These options are not mutually exclusive -- the solution may well be a combination of all three.
_Option 1: Add `nuc_est_conv` as a default measure in extraction_
How: uncomment line 38 in https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/blob/master/extraction/core/functions/defaults.py, then run the whole pipeline again on the desired experiment.
Pros: easy, takes literally 3 seconds to implement
Cons: re-segmenting takes time and may not be desired if cell outlines have already been identified. We may also have to re-do this with multiple experiments, making the data output inconsistent between experiments.
Discussion: Do we want to re-integrate `nuc_est_conv` permanently into the pipeline? @amuoz commented out the measure in cd1b134e5a991539746c8621e8e55ced7b196ede, but no reason was given.
_Option 2: Define an `Extractor` object, adding `nuc_est_conv` as part of parameters, and re-extract images that have cell outlines already defined_
How: Specify parameters by defining an `ExtractorParameters` object (https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/blob/master/extraction/core/extractor.py), adding `nuc_est_conv` as a measure in addition to the existing defaults. Then define an `Extractor` object (https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/blob/master/extraction/core/extractor.py) with these parameters, pass it the images, and re-do extraction (or perhaps only the `nuc_est_conv` part). The new information should be written to the HDF5 file; post-processing can then be run again.
Pros: This is the ideal case. This should require the least resources and is the least redundant way to solve the problem. Plus, it takes advantage of `aliby`'s modularity and parameters-process paradigm.
Cons:
Discussion: Arin has attempted to do this, but has struggled to find the method within `Extractor` to achieve this. His attempt was based on https://git.ecdf.ed.ac.uk/swain-lab/aliby/skeletons/-/blob/master/notebooks/4.%20Re-postprocessing.ipynb, but apparently `PostProcessor` and `Extractor` objects are structured in quite different ways. Here is a sketch:
```python
import h5py
from pathlib import Path
folder = Path("/home/jupyter-arin/data/23174_2022_03_25_flavin_htb2_glucose_limitation_hard_delft_04_02")
from aliby.pipeline import PipelineParameters, Pipeline
pipeline_params = PipelineParameters.default(
general={
"expt_id": 23174, # should match the experiment so that channels match
"distributed": 10, # doesn't matter
"server_info": {
"host": *****,
"username": *****,
"password": *****,
},
},
)
extractor_params_dict = pipeline_params.to_dict()['extraction']
extractor_params_dict['tree']['mCherry']['np_max'].update({'nuc_est_conv'})
from extraction.core.extractor import ExtractorParameters, Extractor
from pathos.multiprocessing import Pool
def extract_file(filepath):
    try:
        with h5py.File(filepath, "a") as f:
            if "extraction" in f:
                del f["/extraction"]
        extractor = Extractor(
            ExtractorParameters.from_dict(extractor_params_dict), filepath)
        extractor.run()
        print(filepath, " PASSED\n")
    except Exception as e:
        print(filepath, " FAILED\n")
        print(e)

with Pool(1) as p:
    results = p.map(
        lambda x: extract_file(x), Path(folder).rglob("*.h5")
    )
```
which currently fails.
_Option 3: Use `nuc_est_conv` function on its own and use that on images._
How: Import it from https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/blob/master/extraction/core/functions/custom/localisation.py
Pros:
Cons: Doesn't take advantage of how things are organised in `aliby`.
**Part 2: find max projection**
The original method should be implemented within `nuc_conv_3d` in https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/blob/master/extraction/core/functions/custom/localisation.py.
The alternative method should be:
Assuming that `nuc_est_conv` is computed separately for each z-slice, we just need to call `numpy.max` on the outputs from each.
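With a stand-in for the real `nuc_est_conv`, the two orderings differ only in where the max is taken (`measure` here is a hypothetical placeholder):

```python
import numpy as np


def measure(img):
    # hypothetical stand-in for nuc_est_conv on a single 2-D image
    return img.mean()


zstack = np.random.default_rng(0).random((5, 16, 16))  # (z, y, x)

# (a) max-project across z first, then compute the measure once
a = measure(zstack.max(axis=0))

# (b) compute the measure per z-slice, then take the max of the results
b = np.max([measure(z) for z in zstack])
```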
Then the results from each method can be plotted and compared.

Assignee: Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

---

# Issue #32: Add data clean-up postprocesses to Signal
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/32
2022-09-27 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
Add data clean-up postprocesses to `Signal` object.
## Current behaviour/setbacks
We have post-processes that can function well as fundamental data clean-up processes. For example, these include smoothing/filtering (Savitzky-Golay), detrending, and normalising data (`scikit-learn`'s `StandardScaler`). Users can invoke these after obtaining post-processed data, but they have to do so manually.
## Desired behaviour/advantages
As we anticipate that these clean-up processes are broadly desired before further analysis, the `Signal` object can include these additional processes.
## Implementation sketch
Within `agora/io/signal.py`, the `Signal` object contains an `apply_prepost()` method (line 62). Currently it applies only the `picker` and `merger` processes by default; additional post-processes (e.g. `savgol`, `detrend`, `standardscaler`) can be added there.

Assignee: Arin Wongprommoon (arin.wongprommoon@ed.ac.uk) · 2022-04-14

---

# Issue #18: NucEstConv3D speed
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/18
2022-09-27 · Alán Muñoz

![image](/uploads/1c88fe502420247665d719f2f3d2a546/image.png)
NucEstConv performs pretty badly speed-wise; it looks like scipy's fft function is the culprit. Maybe there is some optimisation to be done?
```python
#!/usr/bin/env python3
from extraction.core.extractor import Extractor, ExtractorParameters
fpath = "/home/alan/Documents/dev/skeletons/scripts/data/19447_2020_11_18_downUpshift_2_0_2_glu_gcd2_gcd6_gcd7__02/gcd2_001.h5"
# import h5py
from agora.io.bridge import image_creds_from_h5, parameters_from_h5
from aliby.io.omero import Image
from aliby.tile.tiler import Tiler, TilerParameters
image_id, creds = image_creds_from_h5(fpath)
with Image(image_id, **creds) as image:
    params = parameters_from_h5(fpath)
    params["extraction"]["tree"]["Flavin"]["None"] = ["nuc_conv_3d"]

    import cProfile
    import pstats

    ext = Extractor.from_tiler(
        parameters=ExtractorParameters.from_dict(params["extraction"]),
        store=fpath,
        tiler=Tiler.from_hdf5(image, fpath, TilerParameters.from_dict(params["tiler"])),
    )
    pr = cProfile.Profile()
    pr.enable()
    tmp = ext.run(tps=[0])
    pr.disable()
    ps = pstats.Stats(pr)
    ps.dump_stats("speed.prof")
```
Here is the profile, and the code to include nuc_conv3d:
[speed.prof](/uploads/0010461d75eb891f9e747067fa633401/speed.prof)

---

# Issue #16: Positions with low trap identification
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/16
2022-09-27 · Alán Muñoz

Here are the experiment ids -> positions that seem to have trouble with trap identification:
| experiment | position | no. traps |
| ------ | ------ | ------ |
| 19144 | mig1gfp_msn2_mcherry018 | 34 |
| 19334 | ura8h360a_011 | 41 |
| 19334 | ura8h360a_009 | 50 |
| 19334 | ura8h360a_012 | 56 |
| 19334 | ura8h360r_001 | 20 |
| 19334 | ura8_021 | 40 |
| 19334 | ura8_023 | 15 |
| 19970 | (All positions) | 15-35 |
| 19310 | pos012 | 38 |
19334 also has positions with decent trap numbers; the low counts probably have to do with it being a continuation of a previous experiment (it resumed after the last one finished correctly).

---

# Issue #28: FEATURE: Binary classifier -- oscillatory vs non-oscillatory time series
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/28
2022-10-01 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
(Describe the feature you're requesting)
## Current behaviour/setbacks
(Describe what is currently happening and why this is sub-optimal)
## Desired behaviour/advantages
(Describe the desired behaviour and what this will improve)
## Implementation sketch
- Pre-trained on BY4741 time series from experiment 20016?
- Featurisation: `catch22` (does it fit our paradigm to have a post-process depend on another post-process? If not, this feature could go into `skeletons`?)
- Model: likely RF, or SVM. Need to optimise hyperparameters first.
- Output: if SVM, the probability of being oscillatory via `predict_proba` -- this gives more info than a bare oscillatory/non-oscillatory call

---

# Issue #14: Argo object should have a max_date parameter
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/14
2022-09-27 · Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

## Summary
Add a `max_date` parameter (in addition to the `min_date` parameter) to the `Argo` object.
## Current behaviour/setbacks
Defining an `Argo` object, e.g.
```python
argo = Argo(**{"host": ".....",
               "user": ".....",
               "password": ".....",
               }, min_date=(2020, 1, 1)
)
```
can take a `min_date` parameter to search for experiments performed after a certain date -- in this case, after 2020-01-01.
Currently it doesn't have a `max_date` parameter.
## Desired behaviour/advantages
A `max_date` parameter to get it to search for experiments before a certain date.
The advantages are obvious: it helps narrow the search and makes it faster, especially with experiments performed by former members of the research group.
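Independent of the actual `Argo` internals, the filtering reduces to a date-range check. A sketch under that assumption (the `experiments` dict is made up):

```python
from datetime import date


def filter_by_date(experiments, min_date=None, max_date=None):
    """Keep experiments whose date falls within [min_date, max_date].

    Dates are (year, month, day) tuples, as in `min_date=(2020, 1, 1)`;
    either bound may be omitted.
    """
    lo = date(*min_date) if min_date else date.min
    hi = date(*max_date) if max_date else date.max
    return {name: d for name, d in experiments.items() if lo <= date(*d) <= hi}
```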
## Implementation sketch
(If you can, describe how you think this feature should be implemented)

Assignee: Arin Wongprommoon (arin.wongprommoon@ed.ac.uk)

---

# Issue #9: More descriptive name for nucEstConv
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/9
2022-09-27 · swainlab

If we're going to be using it instead of the old `protein_localisation`, we should call it `protein_localisation`. I'm open to other ideas, but my arguments for renaming it are:
1. we're not using it only for nuclear localisation,
2. it's not necessary to specify it's an estimation, any measure of protein localisation is going to be an estimation,
3. the fact that it uses convolutions should probably be in the documentation rather than in the name.

---

# Issue #8: Tasks
https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues/8
2022-06-02 · swainlab

## Next tasks to replicate MATLAB's behaviour
### Core
- [ ] Implement functions
  - [x] Base
  - [x] Trap-wise
  - [ ] Membrane
  - [x] NucEstConv
- Moved to post-proc:
  - Growth rate calculation(s)
  - Birth events
- [x] Filter by ntimepoints/fraction of timelapse
### Classes
- [x] Parameters class
- [x] Extraction class
### I/O
- [x] Connect input to a basic python structure
- [x] Fetch pos/trap/cell information from cell structure
- [x] Export into consensus structure
- [x] Export into dict of dataframes where (row: cell_uniq_id, col: tp)
### Testing
- [x] Obtain results equivalent to the MATLAB analogue
- [ ] Write speed tests