diff --git a/README.md b/README.md
index b524529324b5320ef2cf3979f53cf014cf8eabe7..03019d1a4463cc9367fa9050229eb7ee038d0637 100644
--- a/README.md
+++ b/README.md
@@ -4,15 +4,114 @@
 [![readthedocs](https://readthedocs.org/projects/aliby/badge/?version=latest)](https://aliby.readthedocs.io/en/latest)
 [![pipeline status](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/badges/master/pipeline.svg)](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/pipelines)
 
-The core classes and methods for the python microfluidics, microscopy, data analysis and semi-autopmated reporting.
+The core classes and methods for Python-based microfluidics, microscopy, data analysis, and reporting.
 
-## Documentation
+## Installation
+See [INSTALL.md](./INSTALL.md) for installation instructions.
 
-Documentation is hosted on [readthedocs](https://aliby.readthedocs.io/en/latest)
+## Quickstart Documentation
+### Setting up a server
+For testing and development, the easiest way to set up an OMERO server is by
+using Docker images. 
+[The Software Carpentry](https://software-carpentry.org/) and the [Open
+Microscopy Environment](https://www.openmicroscopy.org) have provided
+[instructions](https://ome.github.io/training-docker/) for doing this.
 
-## Installation
+The `docker-compose.yml` file can be used to create an OMERO server with an
+accompanying PostgreSQL database and an OMERO web server.
+It is described in detail
+[here](https://ome.github.io/training-docker/12-dockercompose/).
 
-See [INSTALL.md](./docs/INSTALL.md)
+Our version of the `docker-compose.yml` has been adapted from the above to
+use version 5.6 of OMERO.
 
-## Contributing
-See [CONTRIBUTING.md](./docs/CONTRIBUTING.md)
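A minimal sketch of what such a `docker-compose.yml` might contain is below. The repository's own file is authoritative; the image tags, service names, and credentials here are illustrative only, adapted from the OME training material:

```yaml
version: "3"

services:
  # PostgreSQL database backing the OMERO server
  database:
    image: "postgres:11"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

  # The OMERO server itself (version 5.6, as noted above)
  omeroserver:
    image: "openmicroscopy/omero-server:5.6"
    environment:
      CONFIG_omero_db_host: database
    depends_on:
      - database

  # Web front-end pointing at the server
  omeroweb:
    image: "openmicroscopy/omero-web-standalone:5.6"
    environment:
      OMEROHOST: omeroserver
    ports:
      - "4080:4080"
```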
+To start these containers (in the background):
+```shell
+cd pipeline-core
+docker-compose up -d
+```
+Omit the `-d` flag to run them in the foreground.
+
+To stop them, in the same directory, run:
+```shell
+docker-compose stop
+```
+
+### Raw data access
+
+```python
+from aliby.io.dataset import Dataset
+from aliby.io.image import Image
+
+server_info = {
+    "host": "host_address",
+    "username": "user",
+    "password": "xxxxxx",
+}
+expt_id = XXXX
+tps = [0, 1]  # Subset of time points to get.
+
+with Dataset(expt_id, **server_info) as conn:
+    image_ids = conn.get_images()
+
+# To get the first position
+with Image(list(image_ids.values())[0], **server_info) as image:
+    dimg = image.data
+    imgs = dimg[tps, image.metadata["channels"].index("Brightfield"), 2, ...].compute()
+    # tps timepoints, Brightfield channel, z=2, all x,y
+```
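The slicing in the last line above selects along the five dimensions of the underlying array, ordered (T, C, Z, Y, X). A minimal NumPy sketch of the same indexing pattern (the array shape and indices here are made up; the real `image.data` is a lazy dask array, which is why the snippet above ends with `.compute()`):

```python
import numpy as np

# Hypothetical stand-in for image.data, with dimensions (T, C, Z, Y, X)
dimg = np.zeros((10, 3, 5, 512, 512))

tps = [0, 1]  # two time points
ch = 1        # index of some channel, e.g. "Brightfield"
z = 2         # a single z-section

# Same pattern as dimg[tps, channel_index, 2, ...] above: all y, x kept
imgs = dimg[tps, ch, z, ...]
print(imgs.shape)  # (2, 512, 512)
```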
+ 
+### Tiling the raw data
+
+A `Tiler` object performs trap registration. It can be built in several ways; the easiest is from an image and the default parameter set.
+
+```python
+from aliby.tile.tiler import Tiler, TilerParameters
+with Image(list(image_ids.values())[0], **server_info) as image:
+    tiler = Tiler.from_image(image, TilerParameters.default())
+```
+
+The initialisation should take a few seconds, as it needs to align the images
+in time.
+
+It fetches the metadata from the Image object and uses the TilerParameters
+values (all Processes in aliby depend on an associated Parameters class, which
+is in essence a dictionary turned into a class).
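As a rough sketch of that pattern (illustrative only, not aliby's actual `TilerParameters`; the attribute names and defaults are made up), such a Parameters class is little more than:

```python
class Parameters:
    """A dictionary turned into a class: keys become attributes."""

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

    @classmethod
    def default(cls):
        # Hypothetical default values, for illustration only
        return cls(tile_size=117, ref_channel="Brightfield")


params = Parameters.default()
print(params.tile_size)  # 117
```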
+
+#### Get a timelapse for a given trap
+TODO: Update this
+```python
+channels = [0]  # Get only the first channel; this is also the default
+z = [0, 1, 2, 3, 4]  # Get all z-positions
+trap_id = 0
+tile_size = 117
+
+# Get a timelapse of the trap
+# The default tile size is 96 by 96; here we use 117
+# The trap is in the centre of the tile, except for edge cases
+# The output has shape (C, T, X, Y, Z), so in this example: (1, T, 117, 117, 5)
+timelapse = seg_expt.get_trap_timelapse(trap_id, tile_size=tile_size,
+                                        channels=channels, z=z)
+```
+
+This can currently take several seconds; fetching fewer z-positions speeds it up.
+
+If you're not sure what indices to use:
+```python
+seg_expt.channels  # Get a list of channels
+channel = "Brightfield"
+ch_id = seg_expt.get_channel_index(channel)
+
+n_traps = seg_expt.n_traps  # Get the number of traps
+```
+
+#### Get the traps for a given time point
+Alternatively, if you want to get all the traps at a given timepoint:
+
+```python
+timepoint = 0
+seg_expt.get_traps_timepoints(timepoint, tile_size=96, channels=None,
+                              z=[0, 1, 2, 3, 4])
+```
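Conceptually, each returned trap tile is a fixed-size square cut from the full frame around the trap's registered centre. A hypothetical sketch of that cropping step (an illustration of the idea, not aliby's implementation; ignoring the edge cases mentioned above):

```python
import numpy as np

def crop_tile(frame, centre, tile_size):
    """Cut a tile_size x tile_size square around `centre` from a 2-D frame."""
    half = tile_size // 2
    y, x = centre
    return frame[y - half:y + half, x - half:x + half]

frame = np.arange(512 * 512).reshape(512, 512)  # a fake full-frame image
tile = crop_tile(frame, centre=(100, 200), tile_size=96)
print(tile.shape)  # (96, 96)
```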
+
+
+## Contributing
+See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines on how to contribute.
diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md
deleted file mode 100644
index cafe30b7a31e792e06b4641bbcc4ada88797c3fb..0000000000000000000000000000000000000000
--- a/docs/CONTRIBUTING.md
+++ /dev/null
@@ -1,31 +0,0 @@
-## Contributing
-
-We focus our work on python 3.7 due to the current neural network being developed on tensorflow 1. In the near future we will migrate the networ to pytorch to support more recent versions of all packages.
-
-### Issues
-All issues are managed within the gitlab [ repository ](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/issues), if you don't have an account on the University of Edinburgh's gitlab instance and would like to submit issues please get in touch with [Prof. Peter Swain](mailto:peter.swain@ed.ac.uk ).
-
-### Branching
-* master: very sparingly and only for changes that need to be made in both
- versions as I will be merging changes from master into the development
- branches frequently
- 
-Branching cheat-sheet:
-```git
-git branch my_branch # Create a new branch called branch_name from master
-git branch my_branch another_branch #Branch from another_branch, not master
-git checkout -b my_branch # Create my_branch and switch to it
-
-# Merge changes from master into your branch
-git pull #get any remote changes in master
-git checkout my_branch
-git merge master
-
-# Merge changes from your branch into another branch
-git checkout another_branch
-git merge my_branch #check the doc for --no-ff option, you might want to use it
-```
-
-### Data aggregation
-
-ALIBY has been tested by a few research groups, but we welcome new data sources for the models and pipeline to be as general as possible. Please get in touch with [ us ](mailto:peter.swain@ed.ac.uk ) if you are interested in testing it on your data.
diff --git a/docs/README.md b/docs/README.md
deleted file mode 100644
index f4a119a2c3a0d7d81bd273e6bec2038879961540..0000000000000000000000000000000000000000
--- a/docs/README.md
+++ /dev/null
@@ -1,148 +0,0 @@
-# ALIBY (Analyser of Live-cell Imaging for Budding Yeast)
-
-[![PyPI version](https://badge.fury.io/py/aliby.svg)](https://badge.fury.io/py/aliby)
-[![readthedocs](https://readthedocs.org/projects/aliby/badge/?version=latest)](https://aliby.readthedocs.io/en/latest)
-[![pipeline status](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/badges/master/pipeline.svg)](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/pipelines)
-
-The core classes and methods for the python microfluidics, microscopy, data analysis and reporting.
-
-### Installation
-See [INSTALL.md](./INSTALL.md) for installation instructions.
-
-## Quickstart Documentation
-### Setting up a server
-For testing and development, the easiest way to set up an OMERO server is by
-using Docker images. 
-[The software carpentry](https://software-carpentry.org/) and the [Open
- Microscopy Environment](https://www.openmicroscopy.org), have provided
-[instructions](https://ome.github.io/training-docker/) to do this.
-
-The `docker-compose.yml` file can be used to create an OMERO server with an
-accompanying PostgreSQL database, and an OMERO web server.
-It is described in detail 
-[here](https://ome.github.io/training-docker/12-dockercompose/).
-
-Our version of the `docker-compose.yml` has been adapted from the above to
-use version 5.6 of OMERO.
-
-To start these containers (in background):
-```shell script
-cd pipeline-core
-docker-compose up -d
-```
-Omit the `-d` to run in foreground.
-
-To stop them, in the same directory, run:
-```shell script
-docker-compose stop
-```
-
-### Raw data access
-
- ```python
-from aliby.io.dataset import Dataset
-from aliby.io.image import Image
-
-server_info= {
-            "host": "host_address",
-            "username": "user",
-            "password": "xxxxxx"}
-expt_id = XXXX
-tps = [0, 1] # Subset of positions to get.
-
-with Dataset(expt_id, **server_info) as conn:
-    image_ids = conn.get_images()
-
-#To get the first position
-with Image(list(image_ids.values())[0], **server_info) as image:
-    dimg = image.data
-    imgs = dimg[tps, image.metadata["channels"].index("Brightfield"), 2, ...].compute()
-    # tps timepoints, Brightfield channel, z=2, all x,y
-```
- 
-### Tiling the raw data
-
-A `Tiler` object performs trap registration. It is built in different ways, the easiest one is using an image and a the default parameters set.
-
-```python
-from aliby.tile.tiler import Tiler, TilerParameters
-with Image(list(image_ids.values())[0], **server_info) as image:
-    tiler = Tiler.from_image(image, TilerParameters.default())
-```
-
-The initialisation should take a few seconds, as it needs to align the images
-in time. 
-
-It fetches the metadata from the Image object, and uses the TilerParameters values (all Processes in aliby depend on an associated Parameters class, which is in essence a dictionary turned into a class.)
-
-#### Get a timelapse for a given trap
-TODO: Update this
-```python
-channels = [0] #Get only the first channel, this is also the default
-z = [0, 1, 2, 3, 4] #Get all z-positions
-trap_id = 0
-tile_size = 117
-
-# Get a timelapse of the trap
-# The default trap size is 96 by 96
-# The trap is in the center of the image, except for edge cases
-# The output has shape (C, T, X, Y, Z), so in this example: (1, T, 96, 96, 5)
-timelapse = seg_expt.get_trap_timelapse(trap_id, tile_size=tile_size, 
-                                        channels=channels, z=z)
-```
-
-This can take several seconds at the moment.
-For a speed-up: take fewer z-positions if you can.
-
-If you're not sure what indices to use:
-```python
-seg_expt.channels # Get a list of channels
-channel = 'Brightfield'
-ch_id = seg_expt.get_channel_index(channel)
-
-n_traps = seg_expt.n_traps # Get the number of traps 
-```
-
-#### Get the traps for a given time point
-Alternatively, if you want to get all the traps at a given timepoint:
-
-```python
-timepoint = 0
-seg_expt.get_traps_timepoints(timepoint, tile_size=96, channels=None, 
-                                z=[0,1,2,3,4])
-```
-
-## Development guidelines
-In order to separate the python2, python3, and "currently working" versions 
-(\#socialdistancing) of the pipeline, please use the branches:
-* python2.7: for any development on the 2 version
-* python3.6-dev: for any added features for the python3 version
-* master: very sparingly and only for changes that need to be made in both
- versions as I will be merging changes from master into the development
- branches frequently
-    * Ideally for adding features into any branch, espeically master, create
-     a new branch first, then create a pull request (from within Gitlab) before 
-     merging it back so we can check each others' code. This is just to make
-     sure that we can always use the code that is in the master branch without
-     any issues.
- 
-Branching cheat-sheet:
-```git
-git branch my_branch # Create a new branch called branch_name from master
-git branch my_branch another_branch #Branch from another_branch, not master
-git checkout -b my_branch # Create my_branch and switch to it
-
-# Merge changes from master into your branch
-git pull #get any remote changes in master
-git checkout my_branch
-git merge master
-
-# Merge changes from your branch into another branch
-git checkout another_branch
-git merge my_branch #check the doc for --no-ff option, you might want to use it
-```
-
-## TODO
-### Tests
-* test full pipeline with OMERO experiment (no download.)
-
diff --git a/docs/source/CONTRIBUTING.md b/docs/source/CONTRIBUTING.md
new file mode 120000
index 0000000000000000000000000000000000000000..44fcc63439371c8c829df00eec6aedbdc4d0e4cd
--- /dev/null
+++ b/docs/source/CONTRIBUTING.md
@@ -0,0 +1 @@
+../CONTRIBUTING.md
\ No newline at end of file
diff --git a/docs/INSTALL.md b/docs/source/INSTALL.md
similarity index 100%
rename from docs/INSTALL.md
rename to docs/source/INSTALL.md
diff --git a/docs/source/README.md b/docs/source/README.md
new file mode 120000
index 0000000000000000000000000000000000000000..32d46ee883b58d6a383eed06eb98f33aa6530ded
--- /dev/null
+++ b/docs/source/README.md
@@ -0,0 +1 @@
+../README.md
\ No newline at end of file
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 9c12c074bb761b237ef772bfd51be7bcfe403ddd..c96dc21452a780a4d258a23a123c3799c0701b00 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -6,10 +6,15 @@
 Welcome to aliby's documentation!
 =================================
 
+Summary
+=======
+
 .. autosummary::
    :toctree: _autosummary
    :template: custom-module-template.rst
    :recursive:
 
+   source/README.md
+   source/CONTRIBUTING.md
    aliby
    extraction