Facebook is working on augmented reality glasses that would help people type by monitoring their brain signals instead of requiring them to use a keyboard.
And two years into the research, the company said on Tuesday that the sci-fi goal may be achievable.
“We want to be able to give people a hands-free and flexible way to interact with AR glasses that they’ll be wearing all day long in a way that’s private and discreet,” said Mark Chevillet, research director at Facebook Reality Labs, the company’s AR and virtual reality group.
The social media giant first announced its goal to build a wearable device that would detect intended speech from brain activity two years ago at F8, its developer conference. On Tuesday, the company said research it funded in collaboration with the University of California, San Francisco proved that people’s brain activity could be decoded and transcribed into text in real time.
It’s the first step in Facebook’s larger goal to create a system capable of typing 100 words per minute by reading the brain.
“The UCSF study is proof of concept for us that silent speech is possible,” Chevillet said. “Beyond that, what it tells us is which neural signals are needed to support a silent-speech interface.”
UCSF researchers conducted the study, part of a larger project called Project Steno, on volunteer participants with normal speech who were undergoing brain surgery to treat epilepsy. The UCSF researchers wanted to find a way to use brain recordings to restore the voices of people who lost their ability to speak.
During brain surgery, electrodes were temporarily placed over portions of participants’ brains to detect activity for about a week. This allowed doctors to monitor their seizures and, at the same time, let researchers understand brain activity related to speech.
Researchers found that brain activity that was recorded while people spoke could be translated into text almost instantly.
“Typically people collect the data and analyze it afterward,” Chevillet said. “But now UCSF is saying they can do it real-time, which makes it interactive.”
David Moses, one of the UCSF researchers, said in a blog post on Tuesday: “This is the first time this approach has been used to identify spoken words and phrases ... In future studies we hope to increase the flexibility as well as the accuracy of what we can translate from brain activity.”
Facebook, which funded the project, said it did not play a role in the study or have contact with the participants.
The next phase of Project Steno is to determine whether it’s possible to use brain activity to restore a disabled participant’s ability to communicate. Facebook is again funding the research and is also providing a small team of its own researchers to offer input and engineering support, the company said.
Facebook isn’t the only company working to harness the power of the human brain for advanced technology.
Neuralink, a company backed by Tesla CEO Elon Musk, has been testing a brain sensor that can help detect neuron activity. It’s already being tested on humans.
Meanwhile, Google has been focused on improving its own AR glasses, Google Glass—a product that's geared toward corporate customers. The company didn’t respond to an inquiry about whether it plans to work on a brain-computer interface.
The research funded by Facebook raises a lot of questions. First, there are privacy concerns about allowing companies like Facebook and Google to have access to the brain. Then there are other ethical questions, like how devices would distinguish between the speech a person wants to make known and the speech a person may want to hold back.
Chevillet said Facebook is aware that there are many more complicated issues ahead.
“We don’t have the ability to answer all the ethical questions or even know what they are,” he said. “But what we can do is … discuss them as part of the conversation.”
And that means taking a transparent approach to the company’s development of these types of products. While the latest research helps Facebook get closer to its ultimate goal of developing a wearable device that can be controlled by the brain, this type of product is still at least 10 years away, Chevillet estimated.
The research proves that it’s possible to translate brain activity for intended speech into text, he said. But there’s still a lot of work to be done to make that possible without electrodes implanted in the brain.
Researchers from London and Stanford University in 2018 published a paper with the U.S. National Library of Medicine in which they suggest “there is no technology currently available that can record an action potential without the need for major surgery.” It further states that the challenges related to creating this technology “will not be solved overnight by Silicon Valley enthusiasm and zeal alone.”
Facebook says it's also collaborating with Johns Hopkins University’s Applied Physics Lab as well as Washington University in St. Louis to build brain devices that could also decode words.
“It’s an exciting time for brain computing technologies,” Chevillet said. “We’ve talked about it since the ’70s, but only in the last 10 to 15 years have there been really compelling demonstrations that brain computing is really possible.”