
Alejo J Nevado-Holgado

MSc, MSc, PhD

Associate Professor

A.I. and bioinformatics applied to mental health

Bioinformatics and Artificial Intelligence

I am an Associate Professor at the Department of Psychiatry and the Big Data Institute, and part of Dementia Research Oxford. Prof Noel Buckley and I jointly head the Computational and Molecular Neuroscience laboratory, an interdisciplinary team of 20 AI, biochemistry, and bioinformatics scientists. Our aim is to better understand neurological disorders by developing and applying state-of-the-art iPSC, AI, and bioinformatics techniques, with a focus on applications of machine learning and bioinformatics to mental health care. We are:

- Laura Winchester (Lead Scientist), bioinformatics

- Andrey Kormilitzin (Lead Scientist), AI

- William Sproviero, genetics

- Qiang Liu, AI microscopy

- Upamanyu Ghose, AI genetics

- Angeliki Papathanasiou, AI genetics

- Niall Taylor, AI NLP

- Taiyu Zhu, AI genetics

- Kholod Alhasani, AI imaging

- Helen Rowland, iPSC models

- Shuhan Li, iPSC models

- Itzel Aguilar Gonzalez, iPSC models

- Nathaniel Gould, iPSC models

- Finnian Gavin (MSc student), AI bioinformatics

- Joshua Sammet (MSc student ETH Zurich), AI imaging

- Max Sterling (MSc student), AI microscopy

- To be appointed (recruiting now!), AI NLP

- To be appointed (recruiting now!), AI NLP & bioinformatics

I also co-supervise from other laboratories:

- Brittany Ulm (with Cornelia van Duijn), bioinformatics

- Sihao Xiao (with Cornelia van Duijn), bioinformatics

- Hanjun Zhao (with Shisong Jiang), bioinformatics

- Hope Han (with Zameel Cader), bioinformatics

I supervise the AI work of our laboratory, which consists of neural networks (NNs) applied to genetics, transcriptomics, proteomics, iPSC cell microscopy, brain imaging, and text from Electronic Health Records. We apply these to data coming from biotech laboratories (i.e. genomics, transcriptomics, proteomics and metabolomics from human samples and iPSC cells) and from hospitals or GP practices (i.e. Electronic Health Records and cohorts of volunteer patients). In the case of biotech data, bioinformatics methods allow us to identify the metabolic processes associated with neurological diseases, so that appropriate pharmaceutical targets and drugs can be developed. In the case of hospital and GP data, artificial intelligence and neural networks allow us to extract the diagnoses, medications, symptoms and medical test results of millions of patients from the free-text notes written by doctors; these can in turn be analysed to evaluate the most effective treatment for each patient (personalised medicine), or to detect whether certain drugs are serendipitously ameliorating psychiatric conditions (drug repurposing). Often, we combine biotech and hospital/GP data to validate laboratory results with real-world evidence, so that any target or treatment we propose has a better chance of succeeding in the final stages of clinical trials. In all cases, we rely on high-performance computing, in both software (e.g. concurrent programming, threading or CUDA) and hardware (e.g. our 70-GPU computational cluster plus cloud facilities), to perform these calculations in hours rather than years.
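As a toy illustration of the kind of information extraction described above: our actual pipelines use trained neural networks over millions of notes, but the sketch below (with an entirely hypothetical concept dictionary and example note) shows the basic idea of pulling structured diagnoses, medications and symptoms out of free text.

```python
# Toy sketch of clinical concept extraction from a free-text note.
# The real laboratory pipeline uses neural NER models; this illustrative
# version matches a small hand-made dictionary (all terms hypothetical).
import re

CONCEPTS = {
    "diagnosis": ["alzheimer's disease", "depression", "parkinson's disease"],
    "medication": ["sertraline", "donepezil"],
    "symptom": ["memory loss", "insomnia"],
}

def extract_concepts(note: str) -> dict:
    """Return a mapping of concept type -> list of terms found in the note."""
    text = note.lower()
    found = {}
    for concept_type, terms in CONCEPTS.items():
        hits = [t for t in terms
                if re.search(r"\b" + re.escape(t) + r"\b", text)]
        if hits:
            found[concept_type] = hits
    return found

note = "Patient with Alzheimer's disease, started on donepezil; reports memory loss."
print(extract_concepts(note))  # one diagnosis, one medication, one symptom
```

Once every note is reduced to structured concepts like these, treatment outcomes and drug-repurposing signals can be analysed at population scale.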

In summary, we believe in the benefits that information technologies can bring (and are bringing) to health care and drug discovery, and we actively work towards implementing these methods in the lab and the clinic, and towards developing the very computational methods that make this possible.


I did my PhD at the University of Bristol, Department of Computer Science, under the supervision of Dr Rafal Bogacz and Dr John Terry. During those very interesting years, I applied mathematical modelling, signal analysis, and machine learning to the study of the basal ganglia and Parkinson's disease. With these techniques, we investigated which anomalies were generating the patterns of neuronal activity recorded by experimental groups, such as our collaborator Dr Peter Magill and his team.

After some experimental training in Cambridge, I am now again applying machine learning and bioinformatics to the study of neurodegeneration, this time to investigate biomarkers and the metabolic network in these diseases. It is known that Alzheimer's and Parkinson's disease have a very long prodromal course: some underlying cause gradually destroys brain tissue, yet the condition is not diagnosed until up to 20 years later, when much of the affected brain is lost and unrecoverable. Therefore, any technique aiming to stop the advance of these diseases must first be able to detect them in the prodromal stage. Traditional analysis approaches have not been able to do so yet, although recent technological developments may change this.

For instance, a very large amount of medical and biological data has been produced during these decades of research, but it is scattered across many institutes and hospitals, and its size makes it impossible for a person to analyse in a classical way. We aim first to link all these data together, and then to analyse them thoroughly with machine learning and artificial intelligence approaches, which can make sense of data that is beyond human interpretation due to its size and complexity. We think this approach, which has proved highly successful in the high-tech industry, offers the best chance of detecting neurodegeneration in its prodromal stage and of helping us understand how to modify its course and avoid brain damage.


Virtual Brain Cloud (EU H2020)

In a large European consortium led by the Charité Universitätsmedizin Berlin, we are investigating how computational simulations and methods can be used to diagnose patients with Alzheimer's disease. The full consortium project started in 2019, is scheduled to last four years, and is funded with €1.5 million. Our role is to advance our biomarkers of Alzheimer's disease so that they can be incorporated into the simulations designed by collaborators in the consortium.

Microbiome in depression (NIH U19)

In a large consortium led by colleagues at Duke University, we are investigating the role of the microbiome in depression, anxiety and sleep perturbations. The full consortium project is scheduled for five years, starting in December 2019, and is funded with $27 million. Our role focuses on applying neural networks to the multiomics datasets gathered by the consortium, as well as providing further proteomics cohorts.

Metabolomics in dementia (MOVE-AD)

This is another large consortium led by Duke University, in which we are investigating the metabolomics of dementia and its intersection with inflammation and cardiovascular disease. Our role in the consortium extends until 2021, with $110,000 dedicated to our team. Our task is to bring machine learning and neural networks to the pool of methods used by the consortium.

Industry projects

In collaboration with industry, we are applying neural networks to genomics data to discover new potential pharmaceutical targets for Alzheimer's disease. The three projects are funded with £900,000 and are currently scheduled to last until 2023.

Blood-brain axis of Alzheimer's disease (AMP-AD)

In collaboration with the AMP-AD consortium, we are investigating how the blood-brain axis of Alzheimer's disease is reflected in the protein levels of 1,200 patients. Our role in the consortium extends until 2020, with $330,000 dedicated to our team. Our task is to apply bioinformatics and machine learning to the proteomic datasets collected from blood and postmortem brain samples, as well as to investigate potential biomarkers that could be built with these data.