Brain implants have allowed a tetraplegic patient to control a whole-body exoskeleton with his thoughts.
A demonstration of the new tech was published in The Lancet Neurology this week.
The patient, who had been left paralysed by a fall, used two chronically implanted wireless brain-computer interfaces to control virtual and physical machines.
The development follows a trial run since 2017 by Clinatec and the University of Grenoble.
Currently, the suit has only been used in lab conditions and the user still needs to be attached to the ceiling by a harness, but the technology has huge potential and promises to improve patients' quality of life in the future.
How it works
The suit works by reading brain activity from surgical implants placed on the surface of the user's brain. Information from these implants is translated into instructions for the suit by a computer in the lab.
As the computer can read the user's brainwaves, it can translate them into basic instructions for the suit to follow. For example, if the user thinks "walk forward", the suit can move his legs forward.
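As a rough illustration, that decoding step can be imagined as the short Python sketch below: take a window of signals from the implants, score it against each known movement, and pick the best match. Everything here (the command set, the toy linear classifier, the array shapes) is a hypothetical stand-in, not the team's actual system.

```python
import numpy as np

# Toy command set; the real system distinguishes many more movements.
COMMANDS = {0: "idle", 1: "walk_forward", 2: "move_left_arm", 3: "move_right_arm"}

def decode_intent(window: np.ndarray, weights: np.ndarray) -> int:
    """Map one window of brain signals (channels x samples) to a command ID.
    A toy linear classifier stands in for the real decoding model."""
    features = window.mean(axis=1)   # crude per-channel feature
    scores = weights @ features      # one score per candidate command
    return int(np.argmax(scores))

# Hypothetical usage: 32 active channels, a 100-sample window, a random "model".
rng = np.random.default_rng(0)
window = rng.standard_normal((32, 100))
weights = rng.standard_normal((len(COMMANDS), 32))
print(COMMANDS[decode_intent(window, weights)])
```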
Talking to the BBC, Thibault, the suit's first user, explained that the tech "was like [being the] first man on the Moon. I didn't walk for two years. I forgot what it is to stand, I forgot I was taller than a lot of people in the room" and that "It was very difficult because it is a combination of multiple muscles and movements."
The next step for the team is to refine the technology. Currently, they are limited by how much information they can read from a user's brain and process in real time: if the signals take too long to process, the user's brain doesn't interpret the movement as natural.
The team need to keep the whole process – from reading the brain signals to translating them into real-world movement – under 350 milliseconds.
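A hedged sketch of what enforcing that budget could look like, with process_window standing in for the whole hypothetical pipeline:

```python
import time

BUDGET_S = 0.350  # end-to-end budget: brain signal -> physical movement

def run_step(process_window, window):
    """Run one decode step and reject it if it blows the latency budget.
    process_window is a hypothetical stand-in for the full pipeline."""
    start = time.perf_counter()
    command = process_window(window)
    elapsed = time.perf_counter() - start
    if elapsed > BUDGET_S:
        # Too slow: the movement would no longer feel natural to the user,
        # so fall back to doing nothing rather than acting late.
        return "idle", elapsed
    return command, elapsed

# Hypothetical usage with a dummy pipeline.
command, elapsed = run_step(lambda w: "walk_forward", None)
print(command, f"{elapsed * 1000:.2f} ms")
```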
Current processing capacity means that, of the 64 electrodes in each implant, the team are only utilising 32. If they can process the signals more efficiently, they can read more information and make more complex movements.
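As a rough illustration of that trade-off, the sketch below keeps only the 32 most active of 64 channels. The electrode counts come from the article, but the variance-based selection rule is purely an assumption:

```python
import numpy as np

ELECTRODES_PER_IMPLANT = 64   # per the article
USABLE = 32                   # what current processing can keep up with

def select_channels(window: np.ndarray, n_keep: int = USABLE) -> np.ndarray:
    """Keep the n_keep most active channels. Variance-based selection is
    a hypothetical heuristic, not the team's actual method."""
    activity = window.var(axis=1)               # signal variance per channel
    keep = np.sort(np.argsort(activity)[-n_keep:])
    return window[keep]

full = np.random.default_rng(1).standard_normal((ELECTRODES_PER_IMPLANT, 100))
print(select_channels(full).shape)  # -> (32, 100)
```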
The team are now looking at how more powerful computers or AI technology could help them do this, alongside adding fingertip controls to allow the user to pick up objects.