Patricia Shaw, Lecturer in Computer Science, Aberystwyth University
June 24, 2019

It’s likely that before too long, robots will be in the home to care for older people and help them live independently. To do that, they’ll need to learn how to do all the little jobs that we might be able to do without thinking. Many modern AI systems are trained to perform specific tasks by analysing thousands of annotated images of the action being performed. While these techniques are helping to solve increasingly complex problems, they still focus on very specific tasks and require lots of time and processing power to train.

If a robot is to help take care of people in old age, then the range of problems it will encounter in the home will vary enormously compared to these training situations. During the course of a day, robots might be expected to do everything from making a cup of tea to changing the bedding while holding a conversation. Each of these tasks is challenging on its own, and harder still when attempted together. No two homes will be the same, which means robots will have to learn fast and adapt to their environment. As anyone sharing a home will appreciate, the objects you need won’t always be found in the same place – robots will need to think on their feet to find them.

One approach is to develop a robot capable of lifelong learning: one that stores knowledge from its experiences and works out how to adapt and apply it to new problems. After learning to make a cup of tea, the same skills could be applied to making coffee.

The best learning agent that scientists know of is the human mind, which is capable of learning throughout its life – adapting to complex and ever-changing environments and solving a wide variety of problems on a daily basis. Modelling how humans learn could help develop robots that we can interact with naturally, much as we’d interact with another person.

Simulating a child’s development

The first question to ask when starting to model humans is: where to start? Alan Turing, the famous mathematician and thinker on artificial intelligence, once said:

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.

He compared the child’s brain to an empty notebook that could be filled through education to develop an intelligent adult “system”. But what age of human child should scientists try to model and install in robots? And what initial knowledge and skills does a robot need to start with?

Newborn babies are very limited in what they can do and what they can perceive of the world around them. The muscle strength in a baby’s neck isn’t sufficient to support the head, and they haven’t yet learned to control their limbs.

Starting at month zero may seem very limiting for a robot, but the physical constraints on the baby actually help it to focus its learning on a small subset of problems, such as learning to coordinate what it sees with what it hears. These steps form the initial stages of a baby building up a model of its own body, before trying to understand all the complexities of the world around it.

We applied a similar set of constraints to a robot by initially locking various joints to simulate the absence of muscle control. We also adjusted the images from the robot’s camera to “see” the world as a newborn baby would – a much blurrier view than adults are used to. Rather than telling the robot how to move, we allow it to discover movement for itself. The benefit of this is that as calibrations change over time, or as limbs get damaged, the robot will be able to adapt to these changes and continue to operate.
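To give a flavour of what this looks like in software, here is a minimal Python sketch of such constraints. The blur strength, the number of joints and the “maturity” parameter are illustrative assumptions, not the actual configuration from our experiments:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def constrain_vision(image, maturity):
    """Blur the camera image to mimic the low visual acuity of a newborn.

    maturity runs from 0.0 (newborn, heavily blurred) to 1.0 (adult, sharp).
    """
    sigma = 8.0 * (1.0 - maturity)   # blur strength is an illustrative choice
    return gaussian_filter(image, sigma=sigma)

def constrain_motors(command, unlocked):
    """Zero the velocity commands of joints that are still 'locked'."""
    masked = np.zeros_like(command)
    masked[unlocked] = command[unlocked]
    return masked

# Example: an "infant" robot with only two of six joints unlocked.
frame = np.random.rand(64, 64)                   # stand-in for a camera frame
command = np.random.uniform(-1.0, 1.0, size=6)   # desired joint velocities
blurred = constrain_vision(frame, maturity=0.1)
safe_command = constrain_motors(command, unlocked=[0, 1])
```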

Learning through play

Our studies show that applying these constraints on learning not only increases the rate at which new knowledge and skills are acquired, but also improves the accuracy of what is learned.

By giving the robot control over when the constraints are lifted – allowing more control over its joints and improving its vision – the robot can control its own learning rate. By lifting these constraints once the robot has saturated its current scope for learning, we can simulate muscle growth in infants and allow the robot to mature at its own rate.
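One simple way to decide when learning has saturated is to watch whether the robot’s prediction error is still improving. The plateau test below, over a sliding window of recent errors, is an illustrative criterion rather than the exact one we used:

```python
def maybe_lift_constraint(error_history, stage, max_stage,
                          window=50, threshold=1e-3):
    """Advance to the next developmental stage once learning plateaus.

    Compares the average prediction error over the last `window` steps
    with the `window` steps before that; if the improvement falls below
    `threshold`, the current stage is considered saturated.
    """
    if stage >= max_stage or len(error_history) < 2 * window:
        return stage
    recent = sum(error_history[-window:]) / window
    earlier = sum(error_history[-2 * window:-window]) / window
    if earlier - recent < threshold:   # little progress: unlock more
        return stage + 1
    return stage
```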

We modelled how an infant learns and simulated the first 10 months of growth. As the robot learned correlations between the motor movements it made and the sensory information it received, stereotypical behaviours observed in infants, such as “hand regard” – where children spend long periods staring at their hands as they move – emerged in the robot’s behaviour.
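The kind of sensorimotor correlation being learned can be illustrated with a toy “motor babbling” example: issue random motor commands, observe where the hand ends up, and fit a map between the two. The two-joint arm model and the linear least-squares fit below are deliberately simplified stand-ins for what a real robot learns:

```python
import numpy as np

rng = np.random.default_rng(0)

def observe_hand(motor):
    """Hypothetical 'true' arm: hand position as a noisy linear
    function of the motor command (invented for illustration)."""
    W_true = np.array([[0.8, 0.1], [-0.2, 0.9]])
    return motor @ W_true.T + rng.normal(scale=0.01, size=2)

# Babbling: 200 random motor commands and the observed hand positions.
motors = rng.uniform(-1.0, 1.0, size=(200, 2))
hands = np.array([observe_hand(m) for m in motors])

# Fit an internal body model: hands ~= motors @ W_learned.
W_learned, *_ = np.linalg.lstsq(motors, hands, rcond=None)
prediction = motors @ W_learned
print("mean prediction error:", np.abs(prediction - hands).mean())
```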

As the robot learns to coordinate its own body, the next major milestone it passes is beginning to understand the world around it. Play is a major part of a child’s learning. It helps them explore their environment, test various possibilities and learn the results.

Initially, this might be something as simple as banging a spoon against a table, or trying to put various objects in their mouths, but this can develop into building towers of blocks, matching shapes or slotting objects into the correct holes. All of these activities are constructing experiences that will provide the foundation for skills later on, such as finding the right key to fit in a lock and the fine motor skills for slotting the key into the keyhole then turning it.

In the future, building on these techniques could give robots the means for learning and adapting to the complex environments and challenges that humans take for granted in everyday life. One day, it could mean robot carers that are as in tune with human needs and as capable of meeting them as another human.


Cover image by Sandy Spence, CC BY-NC

Source: The Conversation