Data-driven computational models of familial Alzheimer’s disease

[UPDATE]
The published version is available here:
Brain 141(5), awy050 (2018).

“The paper is a pleasure to read, as well as scientifically insightful.”
— Journal Editor

My latest paper on Alzheimer’s disease progression has been accepted in Brain. The preprint is on bioRxiv, available for free:

Data-driven models of dominantly-inherited Alzheimer’s disease progression
Neil Oxtoby, Alex(andra) Young, Dave Cash, Tammie Benzinger, Anne Fagan, John Morris, Randy Bateman, Nick Fox, Jon Schott, Danny Alexander
bioRxiv, 250654 (2018)

Familial AD (known more technically as “dominantly-inherited” AD or “autosomal dominant” AD) is a very rare cause of dementia – about 1% of all AD. It’s caused by one of a family of genetic mutations inherited (50/50 chance) from a parent, and results in AD symptoms (memory loss, etc.) appearing earlier than usual – in your 40s or 50s, rather than your 60s or 70s.

Because this rare disease is dominantly inherited, it’s possible to identify people who carry one of the genetic mutations before symptoms appear. These people are usually recruited via their parents, after their parents have been diagnosed. This presymptomatic phase enables us to study familial AD progression before it’s too late, which is impractical for typical, non-familial AD (you’d need to observe many thousands of people, annually, over 10-20 years or more, and many of these wouldn’t develop AD). Further, during this presymptomatic phase of familial AD it’s possible to estimate the number of years until the onset of symptoms in mutation carriers, called “EYO” (Estimated Years to Onset). This is because children often develop symptoms at around the same age as their parents did: usually within about 5 years.
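As a toy illustration of the idea behind EYO (this is just my reading of the concept described above, not DIAN’s exact definition, and the function name is made up):

```python
# Hypothetical sketch: a mutation carrier's estimated years to onset is
# their current age relative to the age at which their affected parent
# developed symptoms. Sign convention assumed: positive means years
# remaining before expected onset.
def estimated_years_to_onset(current_age, parental_onset_age):
    return parental_onset_age - current_age

# e.g. a 38-year-old whose parent developed symptoms at 45:
print(estimated_years_to_onset(38, 45))  # 7 years before expected onset
```

The roughly ±5-year scatter between parent and child onset ages is what limits the precision of this kind of staging.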

So, EYO represents a good, but not great, method/model for “staging” patients along the timeline of familial AD progression.

We wanted to see if data-driven disease progression modelling could do better.

In this paper, we analysed biomarker data including brain imaging data (MRI and PET), specific protein levels in spinal fluid, and scores on a cognitive test to build computational models of the sequence and timing of familial AD progression (specifically, event-based models and differential equation models). The data came from a global collection of volunteer participants including families affected by familial AD (parents and their adult children) in the DIAN dataset.

Our models do not use EYO (the current state of the art), yet they predicted symptom onset more accurately than EYO did (to within 1.3 years, compared to 5.5 years for EYO in our experiments).

Another win for computational, data-driven modelling of neurological diseases!

Next step: apply similar approaches to other diseases, and combine what we learn with the aim of producing a useful tool for identifying people at risk well before the disease has taken hold.

The paper is in production over at Brain and should be available soon.

Sequential disconnection of the brain in Alzheimer’s disease

My latest paper on Alzheimer’s disease progression is available in Frontiers in Neurology:

Data Driven Sequence of Changes to Anatomical Brain Connectivity in Sporadic Alzheimer’s Disease
Neil Oxtoby, Sara Garbarino, Nick Firth, Jason Warren, Jon Schott, Danny Alexander
Front. Neurol., 8, 580 (2017)

Alzheimer’s disease is thought to be a “disconnection syndrome”, where brain regions become increasingly disconnected due to neurodegeneration. No-one has examined the sequence of changes in the elderly brain’s anatomical connectivity over the course of a neurodegenerative disease.

Until now.

In this paper, I analysed brain imaging data (MRI) to build connectomes for healthy and diseased individuals from the public ADNI dataset, and summarised brain connectivity in health and disease using graph theory metrics.

These metrics were then plugged into our ever-reliable event-based model of disease progression (with an important tweak courtesy of Nick) in order to find the sequence of brain disconnections due to Alzheimer’s disease. The paper was published on 7 Nov 2017.

Imaging plus X

My work in the EuroPOND consortium is neatly summarised in our latest paper, where we review the emerging field of data-driven disease progression modelling. It’s open access, so anyone can download and read it for free from here:

Imaging plus X: multimodal models of neurodegenerative disease progression
Neil Oxtoby, Danny Alexander, for the EuroPOND Consortium
Current Opinion in Neurology 30, 371–379 (2017)

 

Technical: MRtrix ACT using GIF parcellation

Software: MRtrix 3 (0.3.15)
Pipeline: Anatomically Constrained Tractography (ACT)

Here in CMIC we analyse a lot of structural MR images using Jorge Cardoso’s Geodesic Information Flows (GIF) algorithm, which utilises the Neuromorphometrics parcellation. Jorge and colleagues currently offer a web service that will segment and parcellate your structural MRI using GIF: NiftyWeb.

I wanted to do some anatomically-constrained tractography and connectomics with MRtrix based on this tutorial, but using a GIF-based parcellation rather than FreeSurfer. Following the hints at the bottom of this ACT tutorial, I succeeded. Keep reading to find out how.

Making MRtrix ACT work using GIF segmentation/parcellation, rather than FreeSurfer

Definitions:
$MRTRIX – path to your mrtrix3 installation
$GIFDB – path to relevant GIF-specific files for ACT (if you’re lucky enough to have the GIF source code, you can generate these yourself, but I provide them below)

1. Calling 5ttgen to generate 5TT.mif

This creates a Five Tissue Type file (cortical GM; subcortical GM; WM; CSF; pathological tissue). When run with the freesurfer argument, 5ttgen calls the following script: $MRTRIX/scripts/src/_5ttgen/freesurfer.py, which relies upon configuration files in the $MRTRIX/scripts/data folder:
FreeSurferACT.txt
FreeSurferACT_sgm_amyg_hipp.txt

How to modify this process for GIF
  • I wrote a MATLAB/Octave script GIFColourLUT_generator.m (not supplied here) to create the necessary config files:
    $MRTRIX/scripts/data/GIF2ACT.txt
    $MRTRIX/scripts/data/GIF2ACT_sgm_amyg_hipp.txt
    $GIFDB/GIFcolourLUT.txt
  • I manually edited $MRTRIX/scripts/src/_5ttgen/freesurfer.py and saved it as
    $MRTRIX/scripts/src/_5ttgen/gif.py
  • I added export GIFDB_HOME=$GIFDB to my bash profile (reminiscent of FREESURFER_HOME)
2. Connectome Lookup Table (LUT)

This step simply renumbers the ROIs, such that the numbers in the image no longer correspond to entries in the colour lookup table (the Neuromorphometrics ROI numbers), but to rows and columns of the connectome.

  • Create GIF version of the connectome LUT as:
    $MRTRIX/src/connectome/tables/gif_default.txt (MRtrix v0.3.15)
    $MRTRIX/src/connectome/config/gif_default.txt (MRtrix v0.3.14)
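The renumbering itself is simple. Here is a minimal Python sketch of the logic, assuming a plain two-column text LUT (original label, connectome node index) – the file format is illustrative only, not MRtrix’s exact LUT specification:

```python
# Sketch of connectome-LUT renumbering: map original parcellation labels
# (e.g. Neuromorphometrics ROI numbers) to consecutive node indices.
def load_lut(lines):
    """Parse 'original_label node_index' pairs, skipping comments/blanks."""
    lut = {}
    for line in lines:
        line = line.split("#")[0].strip()   # strip comments
        if not line:
            continue
        old, new = line.split()[:2]
        lut[int(old)] = int(new)
    return lut

def renumber(labels, lut):
    """Relabel a flat list of voxel values; unknown labels map to 0."""
    return [lut.get(v, 0) for v in labels]
```

After renumbering, node i in the image corresponds directly to row/column i of the connectome matrix.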
3. Rerun the HCP connectome tutorial using GIF

I leave this as an exercise for you.

You’ll need a GIF-processed T1 image, so you should submit the structural MR image to the NiftyWeb GIF parcellation service and save the resulting parcellated file (e.g. T1w_acpc_dc_restore_brain_GIF_Parcellation.nii.gz) in an appropriate location.

Differences if you want to use non-HCP data, such as ADNI

ADNI diffusion data is single-shell, so you need to modify the pipeline at the appropriate points. I have tested this out on processed data from ADNI (contact me if you don’t know how to download processed images from LONI). I found that the diffusion and structural images were misaligned by a translation, so I shifted them to a common origin using MRtrix:

# Extract the T1 image's transform, then impose it on the diffusion image
mrinfo -transform ${T1_}.mif > ${T1_}_transform.txt
mrtransform -replace ${T1_}_transform.txt ${DTI_}.mif ${DTI_}_trans.mif
# Visually check the alignment
mrview ${T1_}.mif -overlay.load ${DTI_}_trans.mif -overlay.opacity 0.3

Decoding Alzheimer’s and Parkinson’s disease

Following a press-release today (23 March 2016), Dr. Laura Phipps from Alzheimer’s Research UK wrote a blog post about my research into Alzheimer’s disease and Parkinson’s disease. Check it out:
www.dementiablog.org/using-computers-to-decode-alzheimers

The official project description can be found here.

For more about my research, drop me a line.

Event-based model of Alzheimer’s disease

Our POND team at UCL have modelled the changes in Alzheimer’s disease, confirming our current understanding of this disease, and providing a tool for diagnosis and prognosis. You can read about it for free in the journal Brain here (open access). Title and abstract below.


A data-driven model of biomarker changes in sporadic Alzheimer’s disease

Alexandra Young, Neil Oxtoby, Pankaj Daga, David Cash, ADNI, Nick Fox, Sebastien Ourselin, Jonathan Schott, and Daniel Alexander

We demonstrate the use of a probabilistic generative model to explore the biomarker changes occurring as Alzheimer’s disease develops and progresses. We enhanced the recently introduced event-based model for use with a multi-modal sporadic disease data set. This allows us to determine the sequence in which Alzheimer’s disease biomarkers become abnormal without reliance on a priori clinical diagnostic information or explicit biomarker cut points. The model also characterises uncertainty in the ordering and provides a natural patient staging system. Two hundred and eighty-five subjects (92 cognitively normal, 129 mild cognitive impairment, 64 Alzheimer’s disease) were selected from the Alzheimer’s Disease Neuroimaging Initiative with measurements of 14 Alzheimer’s disease-related biomarkers including cerebrospinal fluid proteins, regional magnetic resonance imaging brain volume and rates of atrophy measures, and cognitive test scores. We used the event-based model to determine the sequence of biomarker abnormality and its uncertainty in various population subgroups. We used patient stages assigned by the event-based model to discriminate cognitively normal subjects from those with Alzheimer’s disease, and predict conversion from mild cognitive impairment to Alzheimer’s disease and cognitively normal to mild cognitive impairment. The model predicts that cerebrospinal fluid levels become abnormal first, followed by rates of atrophy, then cognitive test scores, and finally regional brain volumes. In amyloid-positive (cerebrospinal fluid amyloid-β1–42 < 192 pg/ml) or APOE-positive (one or more APOE4 alleles) subjects, the model predicts with high confidence that the cerebrospinal fluid biomarkers become abnormal in a distinct sequence: amyloid-β1–42, phosphorylated tau, total tau. However, in the broader population total tau and phosphorylated tau are found to be earlier cerebrospinal fluid markers than amyloid-β1–42, albeit with more uncertainty. 
The model’s staging system strongly separates cognitively normal and Alzheimer’s disease subjects (maximum classification accuracy of 99%), and predicts conversion from mild cognitive impairment to Alzheimer’s disease (maximum balanced accuracy of 77% over 3 years), and from cognitively normal to mild cognitive impairment (maximum balanced accuracy of 76% over 5 years). By fitting Cox proportional hazards models, we find that baseline model stage is a significant risk factor for conversion from both mild cognitive impairment to Alzheimer’s disease (P = 2.06 × 10−7) and cognitively normal to mild cognitive impairment (P = 0.033). The data-driven model we describe supports hypothetical models of biomarker ordering in amyloid-positive and APOE-positive subjects, but suggests that biomarker ordering in the wider population may diverge from this sequence. The model provides useful disease staging information across the full spectrum of disease progression, from cognitively normal to mild cognitive impairment to Alzheimer’s disease. This approach has broad application across neurodegenerative disease, providing insights into disease biology, as well as staging and prognostication.

Ideal Gas Law works for soft crystals

Working with folk from my old stomping ground at the University of Liverpool, we discovered in the laboratory that a two-dimensional dusty plasma can be adequately described using the ideal gas law – even when shock waves are excited to melt the dust crystal.

The title and abstract are below, but you can read all about it in Physical Review Letters here (or for free on the arXiv here).


Ideal Gas Behavior of a Strongly Coupled Complex (Dusty) Plasma

N.P. Oxtoby, E.J. Griffith, C. Durniak, J.F. Ralph, and D. Samsonov

In a laboratory, a two-dimensional complex (dusty) plasma consists of a low-density ionized gas containing a confined suspension of Yukawa-coupled plastic microspheres. For an initial crystal-like form, we report ideal gas behaviour in this strongly coupled system during shock-wave experiments. This evidence supports the use of the ideal gas law as the equation of state for soft crystals such as those formed by dusty plasmas.

UCL

I have started my new role at University College London (UCL). I am working on computational models of neurodegenerative disease progression. Read more here.

APS March Meeting 2012

I will present some results on dusty plasmas at this year’s American Physical Society March Meeting in Boston, USA.  My talk is scheduled for the High Pressure: Experiment session on Thursday March 1.  I’ll be talking about combining Rankine-Hugoniot shock relations and target tracking to derive an equation of state for a dusty plasma.  There is a brief video here.

Tracking shocked dust

Our group’s latest paper has been accepted for publication in Physics of Plasmas:

“Tracking shocked dust: state estimation for a complex plasma during a shock wave”
Neil P. Oxtoby, Jason F. Ralph, Céline Durniak and Dmitry Samsonov
(read the abstract and download from Physics of Plasmas or the arXiv.)

Overview:
The motion of “dust” particles in a complex plasma is obtained by computer-processing frames of a high-speed video. This gives us particle positions as a function of time. An individual particle’s velocity is usually obtained from consecutive positions – a technique known as particle tracking velocimetry (PTV). This yields an estimate of the average velocity between frames, with precision limited by the precision of the particle positions. In particular, errors from pixel locking propagate into velocities estimated using PTV.
We include a Bayesian inference step in the tracking procedure – using an extended Kalman filter to predict the particle position, velocity and acceleration. The prediction is based on a priori knowledge of the dust dynamics. We show that combining prediction and measurement in a weighted sum gives significantly higher precision than PTV.
We also go further to use an interacting multiple model (IMM) filter that handles the shock wave excitation nicely – see the paper for details (quite technical).
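The weighted-sum idea can be sketched in a few lines. Below is a minimal 1-D linear Kalman filter run on synthetic data – not the paper’s extended Kalman/IMM filter, and all noise parameters are assumed – just to show how fusing a dynamics prediction with each noisy position measurement tightens velocity estimates relative to frame-differencing (PTV):

```python
# Minimal 1-D constant-velocity Kalman filter (illustrative parameters only).
import random

def kalman_track(measurements, dt=1.0, q=1e-4, r=0.25):
    """q: process-noise variance, r: measurement-noise variance (assumed).
    Returns (position, velocity) estimates, one per frame after the first."""
    x, v = measurements[0], 0.0        # state: position and velocity
    p00, p01, p11 = 1.0, 0.0, 1.0     # state covariance entries
    estimates = []
    for z in measurements[1:]:
        # Predict: propagate state and covariance under constant velocity
        x = x + v * dt
        p00 += dt * (2.0 * p01 + dt * p11) + q
        p01 += dt * p11
        p11 += q
        # Update: weighted sum of prediction and measurement via Kalman gain
        s = p00 + r
        k0, k1 = p00 / s, p01 / s
        resid = z - x
        x += k0 * resid
        v += k1 * resid
        p11 -= k1 * p01
        p01 -= k1 * p00
        p00 -= k0 * p00
        estimates.append((x, v))
    return estimates

# Compare against plain frame-differencing (PTV) on a synthetic track
random.seed(0)
true_v, noise_sd = 0.1, 0.5
zs = [true_v * t + random.gauss(0.0, noise_sd) for t in range(200)]
est = kalman_track(zs, r=noise_sd ** 2)
kf_err = sum(abs(v - true_v) for _, v in est[100:]) / len(est[100:])
ptv_err = sum(abs((zs[t + 1] - zs[t]) - true_v) for t in range(100, 199)) / 99
```

On this synthetic track the filtered velocity error comes out well below the frame-differencing error, which is the essence of the precision gain reported in the paper.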

The bottom line for physics:
Target tracking (state estimation) can significantly improve the precision of velocity estimates for the dust. This is of major importance for calculating condensed-matter-like quantities such as pressure/stress, kinetic temperature, and dynamic viscosity – to name a few. We calculated a pressure-volume diagram from our results, showing excellent qualitative agreement between experiment and simulation.