Researchers combine neuroscience, technology to help people with disabilities communicate

Researchers in Barcelona have developed a device that produces sounds from brain signals, with the ultimate goal of developing an alternative communication system for people with cerebral palsy.

Led by Mara Dierssen, PhD, head of the Cellular & Systems Neurobiology group at the Centre for Genomic Regulation (CRG), scientists at the CRG, the research company Starlab and the group BR::AC (Barcelona Research Art & Creation) at the University of Barcelona are testing the device with volunteers who are either healthy or have physical and/or mental disabilities, working together with the association Pro-Personas con Discapacidades Físicas y Psíquicas (ASDI) of Sant Cugat del Vallès.

A volunteer demonstrates the Brain Polyphony device. Source: Centre for Genomic Regulation

“At the neuroscientific level, our challenge with Brain Polyphony is to be able to correctly identify the EEG [electroencephalogram] signals — that is, the brain activity — that correspond to certain emotions. The idea is to translate this activity into sound and then to use this system to allow people with disabilities to communicate with the people around them,” Dierssen said in a press release. “This alternative communication system based on sonification could be useful not only for patient rehabilitation but also for additional applications, such as diagnosis.”
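
The release does not spell out the signal-processing steps behind that translation. As a purely illustrative sketch of the kind of first step such a system might take, the Python snippet below computes power in the conventional EEG frequency bands for a single channel; the sampling rate, band limits and function names here are assumptions made for this example, not details of the Brain Polyphony pipeline.

    import numpy as np
    from scipy.signal import welch  # Welch power spectral density estimate

    EEG_RATE_HZ = 256  # hypothetical sampling rate for this illustration

    # Conventional EEG frequency bands (Hz).
    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg_channel: np.ndarray, fs: int = EEG_RATE_HZ) -> dict:
        """Mean spectral power in each conventional band for one EEG channel."""
        freqs, psd = welch(eeg_channel, fs=fs, nperseg=fs * 2)
        return {
            name: float(np.mean(psd[(freqs >= lo) & (freqs < hi)]))
            for name, (lo, hi) in BANDS.items()
        }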

However, the technological and computational aspects of the process are also challenging, according to Dierssen.

“We have to ensure that both the device and the software that translates the signals can give us robust and reproducible signals, so that we can provide this communication system to any patient,” she said.

Brain Polyphony differs from existing sonification systems in its ability to directly “hear” brain waves, according to the release.

“We assign octaves (as they are amplified) until we reach the range audible to the human ear, so that what we hear is really what is happening in our brain. The project aims to achieve this sound and to identify a recognizable pattern for each emotion that we can translate into code words. And all of this happens instantaneously in real-time,” David Ibáñez, MSc, researcher and project manager of Starlab, said in the release.
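
Assigning octaves until a signal becomes audible amounts to shifting every frequency in the recording up by a power of two. The sketch below, which assumes a 256 Hz recording and standard NumPy/SciPy tools rather than the project's actual code, does this by writing an EEG trace as audio at 2**n_octaves times its original sample rate, so that, for example, a 10 Hz alpha rhythm raised by five octaves plays back at 320 Hz.

    import numpy as np
    from scipy.io import wavfile  # any WAV writer would do

    EEG_RATE_HZ = 256        # hypothetical EEG sampling rate
    TARGET_MIN_HZ = 200.0    # hypothetical lower bound for comfortable listening
    ALPHA_HZ = 10.0          # alpha rhythm used as the reference frequency

    def octaves_needed(source_hz: float, target_hz: float) -> int:
        """Octave doublings needed to lift source_hz to at least target_hz."""
        return int(np.ceil(np.log2(target_hz / source_hz)))

    def sonify(eeg: np.ndarray, n_octaves: int, out_path: str) -> None:
        """Write the EEG trace as audio shifted up by n_octaves.

        Playing the samples back at EEG_RATE_HZ * 2**n_octaves multiplies
        every frequency component in the signal by 2**n_octaves.
        """
        playback_rate = EEG_RATE_HZ * (2 ** n_octaves)
        pcm = np.int16(eeg / (np.max(np.abs(eeg)) + 1e-12) * 32767)
        wavfile.write(out_path, playback_rate, pcm)

    if __name__ == "__main__":
        # Synthetic 10 Hz "alpha-like" trace standing in for a real recording.
        t = np.arange(0, 10, 1.0 / EEG_RATE_HZ)
        fake_eeg = np.sin(2 * np.pi * ALPHA_HZ * t) + 0.1 * np.random.randn(t.size)
        n = octaves_needed(ALPHA_HZ, TARGET_MIN_HZ)  # 5 octaves: 10 Hz -> 320 Hz
        sonify(fake_eeg, n, "eeg_sonified.wav")

Re-timing the samples in this way is the simplest offline way to apply a uniform octave shift; a real-time system such as the one described in the release would need a streaming equivalent of the same idea.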

Reference: www.crg.es.
