New ‘AutoMorph’ tool allows researchers to analyse retinal photographs automatically

In a paper published in July 2022, NIHR Moorfields BRC-supported PhD student Yukun Zhou describes how a computer programme called ‘AutoMorph’, developed by him and his team, can automatically analyse photographs of the back of the eye. This is a significant development in a rapidly expanding area of medical research known as ‘oculomics’, where A.I. (artificial intelligence) is used to analyse high-resolution eye images to reveal important signs not just about the eyes, but about the health of the whole body*.

*For a recent paper about this, see: Siegfried Wagner et al., ‘Insights into Systemic Disease through Retinal Imaging-Based Oculomics’, Transl Vis Sci Technol, February 2020
AutoMorph summary: Fully automated pipeline; Image pre-processing; Image quality grading; Artery/vein/optic disc segmentation; Clinically-relevant features; Open access

The key features of AutoMorph

Yukun Zhou

Yukun Zhou – a student at UCL’s Centre for Medical Image Computing and Moorfields Eye Hospital NHS Foundation Trust – was the lead author for the paper, entitled ‘AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline’. It was published in the ARVO journal Translational Vision Science & Technology in July 2022. AutoMorph is one of a number of ongoing projects led by NIHR Moorfields BRC researchers that harness the potential of A.I. to improve healthcare.

We spoke to Yukun about his research and what it means for the future of eye health.

What prompted the development of AutoMorph?

Due to the transparent nature of eye tissue, the back of the eye (the retina) is the only place in the body where living blood vessels (known as ‘vasculature’) and nerves can easily be studied without invasive or potentially harmful procedures. A simple technique called ‘fundus photography’ can capture the vasculature of the retina in a split second.

Fundus photographs are a part of routine eye care, and are taken regularly at opticians and in hospital eye clinics to check for a number of common eye conditions, such as diabetic retinopathy, age-related macular degeneration (AMD), macular oedema and retinal detachment. However, along with other types of high-resolution image, such as optical coherence tomography (OCT), fundus photographs can also give us clues about our overall health, including diseases affecting the whole body – so-called ‘systemic diseases’. This includes common conditions such as high blood pressure, high cholesterol, diabetes, risk of stroke, heart attack and dementia.

Interpreting these photographs – a process known as ‘grading’ – normally requires highly trained individuals and can be very time-consuming. Yukun set out to create a computer model that could perform this process automatically, and in a fraction of the time – roughly 20 seconds per image.

Diagram showing how analysing data from individual eye scans will eventually bring benefit back to patients: Images from eye screening leads to automatic feature analysis, which then leads to risk prediction for individuals.

AutoMorph analyses retinal vascular features for ophthalmic and systemic disease monitoring, including risk prediction of cardiovascular and neurodegenerative diseases. Individuals at high risk of ocular and systemic diseases (highlighted in red) are identified from a large cohort.

What can these measurements tell us?

The science of using the eye to make wider assessments about a person’s health dates back at least a thousand years, when it was first recorded in Arab texts*. More recently, it has taken a huge leap forward thanks to advances in imaging techniques and computing. Researchers are increasingly using a process called ‘machine learning’, where computers analyse large numbers of images very rapidly, comparing tiny biological changes in the back of the eye and linking them to someone’s overall health record. As a result, researchers are starting to detect things that may not be visible to humans at all. The potential for individual patients is that doctors will be able to predict and monitor a number of serious health conditions using a simple eye-scan or photograph.

This rapidly developing area of medicine is known as ‘oculomics’ – a word which combines ‘oculus’ (the Latin word for ‘eye’) with the suffix ‘-omics’ (which refers to the use of large-scale data to understand something big). In short, oculomics is the science of combining eye and health data from lots of people with machine learning, in order to better understand the overall health of the body.

*For example, in the writings of medieval oculist Ali ibn Isa al-Kahhal, around 1010 CE

What exactly does AutoMorph do?

AutoMorph automatically analyses the photographs and produces a series of measurements of the blood vessels at the back of the eye.

The results of manually graded images are prone to variability between human graders, whereas AutoMorph provides consistent and reliable results.

Diagram showing the four stages of the AutoMorph 'pipeline' process: Pre-processing, Image quality grading (deep learning), Anatomical segmentation (deep learning), Feature measurement

A diagram of the AutoMorph pipeline, starting with a colour fundus photograph and ending with the output of vascular morphology features.

Why is AutoMorph called a ‘pipeline’?

AutoMorph is referred to as a ‘pipeline’ because it consists of a series of computer processes performed on batches of photographs in a set order. There are four processes – or stages:

  • Stage 1: Pre-processing of images
    For the programme to be able to process the photographs, they all need to have the same square proportions. So, the first step is to ensure that rectangular images are either cropped or ‘padded’. The system assesses the image and crops it if there is unnecessary background, or, if the image is already closely cropped, adds ‘padding’ to make it square.

  • Stage 2: Image quality grading
    AutoMorph assesses image quality and filters out photographs that are not of high enough quality to be accurately ‘graded’. Each image is classed as ‘good’, ‘usable’ or ‘reject quality’. Depending on the desired output, the system can use only ‘good’ images, or both ‘good’ and ‘usable’ images. Reject-quality images are discarded, which improves the reliability of the subsequent stages.

  • Stage 3: Anatomical segmentation
    This is where the computer isolates particular areas of interest and separates them from parts of the photograph that are not required. For example, it can separate the optic nerve or blood vessels from the background, or filter out ‘noise’ – the parts of the image that are not relevant. The result is a set of ‘segmentation maps’: images used for measuring vascular morphology features.

  • Stage 4: Vascular morphology feature measurement
    The ‘morphology’ of an object refers to different aspects of its shape – in this case, the shape of the blood vessels is measured. This final process is the key to being able to diagnose a range of health conditions. Several measurements of the blood vessels are taken, including ‘tortuosity’, ‘fractal dimension’, ‘density’ and ‘calibre’, as well as ‘cup-to-disc ratio’:

      • ‘Tortuosity’ is the presence of abnormal twists and turns in the blood vessels. It is associated with high blood pressure, high cholesterol and other cardiovascular risk factors.

      • ‘Fractal dimension’ is a measure of the complexity of the branching pattern by which blood vessels divide into ever-smaller channels as they get closer to the cells that they feed. Abnormal values may mean that disease is disrupting the normal development of the blood vessels.

      • The ‘density’ and ‘calibre’ of blood vessels can be used to assess a range of conditions, including age-related macular degeneration (AMD), retinal vein occlusion, diabetic retinopathy, inflammation, high blood pressure, high cholesterol and atherosclerosis (blockages which can lead to blood clots and which, if present elsewhere in the body, could cause heart attacks and strokes).

      • Cup-to-disc ratio can be used to assess the progression of glaucoma and schizophrenia.

Diagram showing the features measured by AutoMorph, including tortuosity, vessel calibre, cup-to-disc ratio, etc.
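Stage 1’s cropping-and-padding step can be illustrated with a minimal NumPy sketch. This is an illustration of the idea rather than AutoMorph’s actual code, and the function name `pad_to_square` is hypothetical:

```python
import numpy as np

def pad_to_square(image: np.ndarray, fill: int = 0) -> np.ndarray:
    """Pad a (height, width, channels) image with a border so it becomes square.

    The shorter axis is padded equally on both sides, mirroring the
    'padding' step described above for closely cropped photographs.
    """
    h, w = image.shape[:2]
    size = max(h, w)
    top = (size - h) // 2
    left = (size - w) // 2
    padded = np.full((size, size) + image.shape[2:], fill, dtype=image.dtype)
    padded[top:top + h, left:left + w] = image
    return padded

# A 600x800 rectangular 'photograph' becomes 800x800 after padding.
photo = np.zeros((600, 800, 3), dtype=np.uint8)
square = pad_to_square(photo)
print(square.shape)  # (800, 800, 3)
```

The complementary cropping step would instead trim away rows and columns of unnecessary background before any padding is applied.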
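Two of the Stage 4 measurements are simple enough to sketch in a few lines of NumPy: vessel ‘density’ as the fraction of vessel pixels in a binary segmentation map, and ‘tortuosity’ as the ratio of a vessel centreline’s arc length to its straight-line (chord) length. This is an illustrative simplification with hypothetical function names, not AutoMorph’s implementation:

```python
import numpy as np

def vessel_density(segmentation: np.ndarray) -> float:
    """Fraction of pixels classed as vessel (1) in a binary segmentation map."""
    return float(segmentation.mean())

def tortuosity(centreline: np.ndarray) -> float:
    """Arc length of a vessel centreline divided by its chord length.

    `centreline` is an (n, 2) array of (row, col) points along one vessel.
    A perfectly straight vessel gives 1.0; twisted vessels give larger values.
    """
    steps = np.diff(centreline, axis=0)          # vectors between consecutive points
    arc = np.linalg.norm(steps, axis=1).sum()    # total path length along the vessel
    chord = np.linalg.norm(centreline[-1] - centreline[0])
    return float(arc / chord)

straight = np.array([[0, 0], [0, 1], [0, 2]])
bent = np.array([[0, 0], [1, 1], [0, 2]])
print(tortuosity(straight))  # 1.0
print(tortuosity(bent))      # ~1.41 – a sharp kink raises the ratio
```

In practice these measurements are taken from the segmentation maps produced in Stage 3, and abnormal values feed into the disease associations described above.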

Why has AutoMorph been made ‘open source’?

Being open source means that anyone in the world can easily access the computer code behind AutoMorph for free, without having to seek permission. The aim of this is to encourage progress in the emerging field of oculomics, where a lack of access to code and data can hinder researchers, especially if they have limited finances. Although only recently made public, AutoMorph has already provoked interest from leading academics, and Yukun expects it to be useful to researchers around the world.

Can AutoMorph be used as a diagnostic tool in clinics?

Artificial Intelligence systems like AutoMorph are not currently being used in eye clinics in the U.K., although, in places including the U.S., other A.I. systems have already gained regulatory approval. For now, AutoMorph is aimed at helping researchers who want to use the power of machine learning to understand the vasculature of the eye and how it relates to the rest of the body. However, given enough refinement and testing, systems based on AutoMorph could eventually be used as part of routine eye care, helping clinicians to interpret fundus photographs more quickly and accurately.

A screenshot of the AutoMorph interface on Google Colaboratory.

What are the limitations of AutoMorph?

The AutoMorph algorithm (computer programme) is still being refined by Yukun and his team to improve its performance, especially at the image segmentation stage. More work also needs to be done on validating (testing) the system – ensuring that it will work consistently on any suitable fundus photograph. Publicly available datasets have already been used to validate AutoMorph successfully, but this also needs to be done on population-level datasets (large-scale sets of data covering hundreds of thousands of people). The images in these large datasets do not yet have enough detailed human annotation to be used as a benchmark for testing the algorithm.

What are the next steps for AutoMorph?

Yukun and his team are working with the AlzEye project to help validate AutoMorph on a diverse population, using routine eye data collected by Moorfields Eye Hospital. They are also devising cutting-edge ‘data augmentation’ techniques, in which existing sets of data are slightly modified to increase the amount of data artificially – helping to train the algorithm and improve its accuracy even further.
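The basic idea behind data augmentation can be shown with a minimal NumPy sketch – simple flips and rotations of one photograph yield several training images showing the same anatomy in new orientations. AutoMorph’s actual augmentation techniques are more sophisticated; this is only an illustration, and the function name `augment` is hypothetical:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return simple modified copies of one image: flips and right-angle rotations.

    Each copy shows the same content in a new orientation, artificially
    enlarging a training set without collecting any new photographs.
    """
    return [
        image,                 # original
        np.fliplr(image),      # mirror left-right
        np.flipud(image),      # mirror top-bottom
        np.rot90(image, k=1),  # rotate 90 degrees
        np.rot90(image, k=2),  # rotate 180 degrees
    ]

photo = np.arange(12).reshape(3, 4)
copies = augment(photo)
print(len(copies))  # 5 – one photograph becomes five training images
```

Real augmentation pipelines also vary brightness, contrast and small geometric distortions, but the principle is the same: more varied examples make the trained model more robust.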

Find out more


Alongside Yukun Zhou, the following authors contributed to the paper: Siegfried K. Wagner; Mark A. Chia; An Zhao; Peter Woodward-Court; Moucheng Xu; Robbert Struyven; Daniel C. Alexander; and Pearse A. Keane.

Funding acknowledgements

The AutoMorph project is supported by grants from the Engineering and Physical Sciences Research Council; by the National Institute for Health and Care Research (NIHR) Moorfields Biomedical Research Centre; by an MRC Clinical Research Training Fellowship (to Siegfried Wagner); by a Moorfields Eye Charity Career Development Award (to Pearse Keane); and by a UK Research & Innovation Future Leaders Fellowship (to Pearse Keane).