Alejo J Nevado-Holgado
MSc, MSc, PhD
A.I. and bioinformatics applied to mental health
Bioinformatics and Artificial Intelligence
I lead the AI team of the TNDR laboratory (https://www.psych.ox.ac.uk/research/dementia-research-group) in the Department of Psychiatry, a team of 10 excellent machine learning scientists and bioinformaticians. Our focus is on the applications of machine learning and bioinformatics to mental health care. In addition, I hold a position at the Big Data Institute, where we collaborate on the application of machine learning to genomics and target discovery. I am also a consultant to a number of AI companies.
The main technologies that we apply and develop in our laboratory are bioinformatics, artificial intelligence and high performance computing. We apply these to data coming from biotech laboratories (i.e. genomics, transcriptomics, proteomics and metabolomics from human samples and iPSC cells) and from hospitals or GP practices (i.e. Electronic Health Records and cohorts of volunteer patients). In the case of biotech data, bioinformatics methods allow us to identify the metabolic processes associated with neurological diseases, so that appropriate pharmaceutical targets and drugs can be developed. In the case of hospital and GP data, artificial intelligence and neural networks allow us to extract, from the text notes written by doctors, the diagnoses, medications, symptoms and medical test results of millions of patients, which in turn can be analysed to evaluate the most efficient treatment for each patient (personalised medicine), or to discover whether certain drugs are serendipitously ameliorating psychiatric conditions (drug repurposing). Often, we combine biotech and hospital/GP data to validate laboratory results with real-world evidence, so that any target or treatment we propose has a better chance of succeeding in the final stages of clinical trials. In all cases, high performance computing, through both software (e.g. concurrent programming, threading or CUDA) and hardware (e.g. our 40-GPU computing cluster), is our choice for performing all these calculations in hours rather than years.
In summary, we believe in the benefits that information technologies can bring (and are bringing) to health care and drug discovery, and we actively work towards implementing these methods in the lab and the clinic, and towards developing the very computational methods that make this possible.
I did my PhD at the University of Bristol, Department of Computer Science, under the supervision of Dr Rafal Bogacz and Dr John Terry. During those very interesting years, I applied mathematical modelling, signal analysis and machine learning to the study of the basal ganglia and Parkinson's disease. With these techniques we investigated which anomalies were generating the patterns of neuronal activity recorded by experimental groups, such as that of our collaborator Dr Peter Magill and his team.
After some experimental training in Cambridge, I am now again applying machine learning and bioinformatics to the study of neurodegeneration, this time to investigate biomarkers and the metabolic network of these diseases. It is known that Alzheimer's and Parkinson's disease have a very long prodromal course: some underlying cause gradually destroys brain tissue, yet the condition is not diagnosed until up to 20 years later, when most of the affected brain tissue is lost and unrecoverable. Therefore, any technique aiming to stop the advance of these diseases first needs to be able to detect them in the prodromal stage. Traditional analysis approaches have not been able to do so yet, although recent technological developments may change this.
For instance, a very large amount of medical and biological data has been produced during these decades of research, but these data are scattered across many institutes and hospitals, and their size makes it impossible for a person to analyse them in the classical way. We aim first to link all these data together, and then to analyse them thoroughly with machine learning and artificial intelligence approaches, which can make sense of data that are beyond human interpretation due to their size and complexity. We think this approach, which is proving highly successful in the tech industry, has the best chance of detecting neurodegeneration in its prodromal stage, and of helping us understand how to modify its course and avoid brain damage.
Virtual Brain Cloud (EU H2020)
In a large European consortium led by the Charité Universitätsmedizin Berlin, we are investigating how computational simulations and methods can be used to diagnose patients with Alzheimer's disease. The full consortium project started in 2019, is scheduled to last for 4 years, and is funded with 1.5 million euros. Our role is to progress our biomarkers of Alzheimer's disease so that they can be incorporated into the simulations designed by collaborators in the consortium.
Microbiome in depression (NIH U19)
In a large consortium led by colleagues at Duke University, we are investigating the role of the microbiome in depression, anxiety and sleep perturbations. The full consortium project is scheduled for 5 years, starting in December 2019, and is funded with $27 million. Our role focuses on applying neural networks to the multi-omics datasets gathered by the consortium, as well as contributing further proteomics cohorts.
Metabolomics in dementia (MOVE-AD)
This is another large consortium led by Duke University, in which we are investigating the metabolomics of dementia and its intersection with inflammation and cardiovascular disease. Our role in the consortium extends until 2021, with $110,000 dedicated to our team. Our task is to bring machine learning and neural networks to the pool of methods used by the consortium.
In collaboration with industry, we are applying neural networks to genomics data to discover new potential pharmaceutical targets for Alzheimer's disease. The three projects are funded with £900,000, and are currently scheduled to last until 2023.
Blood-brain axis of Alzheimer's disease (AMP-AD)
In collaboration with the AMP-AD consortium, we are investigating how the blood-brain axis of Alzheimer's disease is reflected in the protein levels of 1,200 patients. Our role in the consortium extends until 2020, with $330,000 dedicated to our team. Our task is to apply bioinformatics and machine learning to the proteomic datasets collected from blood and post-mortem brain samples, and to investigate potential biomarkers that could be built from these data.