brain-computer interfaces tagged posts

‘Deepfaking the Mind’ Could Improve Brain-Computer Interfaces for People with Disabilities

A USC team successfully taught an AI to generate synthetic brain activity data, which could improve the usability of brain-computer interfaces. Photo/iStock.

Researchers at the USC Viterbi School of Engineering are using generative adversarial networks (GANs) — technology best known for creating deepfake videos and photorealistic human faces — to improve brain-computer interfaces for people with disabilities.

In a paper published in Nature Biomedical Engineering, the team successfully taught an AI to generate synthetic brain activity data. The data, specifically neural signals called spike trains, can be fed into machine-learning algorithms to improve the usability of brain-computer interfaces (BCI).
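For readers curious what "generating synthetic brain activity data" might look like in practice, the sketch below is a minimal, hypothetical illustration of the general approach rather than the USC team's published model: a small GAN whose generator turns random noise into synthetic spike-count trials that could be pooled with real recordings when training a BCI decoder. The network sizes, the toy stand-in for "real" recordings, and the PyTorch setup are all assumptions.

# Minimal GAN sketch for synthetic spike-count trials (illustrative only;
# not the model described in the paper).
import torch
import torch.nn as nn

N_NEURONS, N_BINS, NOISE_DIM = 16, 50, 32  # hypothetical channel count, time bins, latent size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_NEURONS * N_BINS), nn.Softplus(),  # non-negative "spike counts"
        )
    def forward(self, z):
        return self.net(z).view(-1, N_NEURONS, N_BINS)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_NEURONS * N_BINS, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x.flatten(1))

def toy_real_trials(batch):
    # Stand-in for recorded spike trains: Poisson counts around assumed firing rates.
    rates = 2.0 + 3.0 * torch.rand(batch, N_NEURONS, N_BINS)
    return torch.poisson(rates)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):  # tiny loop, just to show the adversarial game
    real = toy_real_trials(64)
    fake = G(torch.randn(64, NOISE_DIM))

    # Discriminator learns to tell recorded trials from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Draw extra synthetic trials that could be mixed with real data to train a BCI decoder.
synthetic_trials = G(torch.randn(100, NOISE_DIM)).detach()

The point of the augmentation step is simply that a decoder trained on real trials plus plausible synthetic ones can need fewer recording sessions from the user, which is what makes the interface easier to calibrate.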

BCI systems work by analyzing a person’s brain signals and translating that neur...

Read More

Engineers Translate Brain Signals Directly into Speech

Figure 1: Schematic of the speech reconstruction method.

Advance marks critical step toward brain-computer interfaces that hold immense promise for those with limited or no ability to speak.

In a scientific first, Columbia neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone’s brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain...
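As a rough picture of what a "brain signals to speech" pipeline can look like, the sketch below is a simplified, hypothetical version rather than the Columbia system itself: a regularized linear decoder maps simulated per-frame neural features to speech spectrogram frames, and Griffin-Lim phase reconstruction stands in for the neural-network vocoder used in the actual study. All data here are simulated, and the feature sizes and regression choice are assumptions.

# Conceptual decode-then-synthesize sketch (illustrative only; not the published method).
import numpy as np
from sklearn.linear_model import Ridge
import librosa

rng = np.random.default_rng(0)
N_FRAMES, N_ELECTRODES, N_FFT, HOP = 400, 64, 512, 128
N_FREQ = N_FFT // 2 + 1

# Simulated stand-ins: neural features per time frame and the magnitude
# spectrogram of the speech the listener heard.
neural = rng.normal(size=(N_FRAMES, N_ELECTRODES))
true_spec = np.abs(rng.normal(size=(N_FRAMES, N_FREQ)))

# Fit a regularized linear decoder on the first half of the frames,
# then reconstruct the spectrogram from held-out neural features.
split = N_FRAMES // 2
decoder = Ridge(alpha=1.0).fit(neural[:split], true_spec[:split])
pred_spec = np.clip(decoder.predict(neural[split:]), 0.0, None)  # magnitudes are non-negative

# Turn the predicted magnitude spectrogram back into audio. Griffin-Lim phase
# recovery is used here as a simple stand-in for a vocoder.
waveform = librosa.griffinlim(pred_spec.T, n_iter=32, hop_length=HOP, n_fft=N_FFT)
print(waveform.shape)  # reconstructed audio samples

In the reported system the listener's auditory cortex responses drive the reconstruction, so the better the predicted spectrogram, the more intelligible the synthesized speech becomes to an outside listener.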

Read More