ALICE the Robot visits Bloomsbury

Image shows Bloomsbury Festival Logo

The INSIGHT PPIE team took part in the Bloomsbury Festival 2021. The festival ran across a week and focused on science and art, two of our favourite subjects. The theme this year was Shining Light, and we thought we would shine a light on machine learning and the use of data in healthcare and research.

We had a place alongside other scientists and educators at the Senate House Discovery Hub where, over the course of two days, we introduced hundreds of children and adults to our very own Artificial Learning Intelligent Computer Eye Robot, or ALICE for short.

We have been talking to young people about machine learning with ALICE for some time: first with the Moorfields Young Persons Advisory Group (YPAG), where we looked at how best to explain this complicated topic, and then with a short film using ALICE to illustrate the principles of machine learning, showing how a computer can learn to recognise images by being ‘taught’ with big data sets of labelled and classified images, such as learning to recognise a cat without being able to see all of its features. Please watch our film below.

Image shows two women standing next to Alice the Robot model

We took this one step further at the Discovery Hub, bringing along a life-size ALICE who could look at an object and identify it, or not!

We used a camera as ALICE’s eyes, linked up to a piece of software that would attempt to recognise the item it was shown. The software had varying levels of success, and this is where we had our most informative conversations.

If the object was instantly recognised, why?

If the object was identified as its materials, why?

If the object was identified as something else altogether, why?

Did the software recognise humans?

How do human brains identify things, and how did they think this differs from how machines do it?
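For readers curious about the nuts and bolts, here is a rough sketch of how a camera-plus-classifier demo like ALICE’s could be wired together. The library choices (OpenCV for the camera and a pretrained torchvision model for the recognition) are illustrative assumptions, not the exact software we ran on the day.

```python
# A rough sketch of a live demo like ALICE's: grab one frame from a webcam
# and ask a model already "taught" on ImageNet's labelled images what it sees.
# OpenCV and torchvision are illustrative choices, not the festival software.

import cv2
import torch
from torchvision import models
from torchvision.models import MobileNet_V2_Weights

weights = MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()        # resizing and normalisation the model expects
labels = weights.meta["categories"]      # human-readable class names

camera = cv2.VideoCapture(0)             # the camera acting as ALICE's eye
ok, frame = camera.read()
camera.release()

if ok:
    # OpenCV gives BGR pixels; the model expects RGB.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))
    with torch.no_grad():
        scores = model(tensor.unsqueeze(0)).softmax(dim=1)
    confidence, idx = scores.max(dim=1)
    print(f"I think this is a {labels[idx.item()]} ({confidence.item():.0%} sure)")
```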

These conversations led us to talk about the computer needing to see lots of labelled images of the same object before it can confidently label a new image it sees. For example, a plastic toy cow was sometimes identified as a cow figurine, sometimes as a plastic toy, and sometimes as a penguin, depending on the angle at which it was shown to the software. This opened a conversation about how a computer recognises shapes and materials, and then about how you can teach a computer to learn more.
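To give a flavour of that ‘teaching’ step, here is a hedged sketch of how a computer could be shown many labelled pictures and retrained to tell a small set of toys apart. The folder names, labels and training settings are invented for illustration; a real project would need far more images, taken from many angles, plus careful testing.

```python
# A sketch of the "teaching" step: show the computer many labelled pictures so
# it can confidently label new ones. The folder layout ("toys/cow", "toys/penguin")
# and settings are invented for illustration, not a real data set.

import torch
from torch import nn
from torchvision import datasets, models
from torchvision.models import MobileNet_V2_Weights

weights = MobileNet_V2_Weights.DEFAULT

# Labelled images sorted into folders by label, e.g. toys/cow/*.jpg
data = datasets.ImageFolder("toys", transform=weights.transforms())
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

# Start from a model that already knows general shapes and textures,
# then retrain only its final layer to tell our own objects apart.
model = models.mobilenet_v2(weights=weights)
for param in model.features.parameters():
    param.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, len(data.classes))

optimiser = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                   # a few passes over the labelled images
    for images, targets in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimiser.step()
```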


We also talked about how researchers and healthcare professionals are using big data sets of OCT (Optical Coherence Tomography) scans in the same way, to help identify eye conditions earlier.

We had a brilliant time with all the people we met and spoke to. It was rewarding to see adults and children join the dots themselves and really start to understand the processes involved in machine learning in healthcare. We were delighted to know that lots of people will take ALICE back to their schools and homes to learn more about the subject and how we can make a difference using this tool to advance scientific discovery.

Image shows adult talking to a child with a Robot model between them