Deep learning pioneer Fei-Fei Li has big hopes for healthcare’s future.
Imagine patients wearing A.I.-powered sensors that report back to doctors about whether those patients are following their treatment plans at home. Or sensors that can detect whether patients are about to fall out of bed and, if so, alert their caretakers.
Li is co-director of Stanford University’s Human-Centered AI Institute (HAI), which attempts to bridge A.I. and computer science with other fields including policy and law. She was also the brains behind the high-profile ImageNet dataset and contest, which helped kickstart the rise of neural network software that can learn to recognize objects in photos, such as dogs.
Li shared her vision for A.I. in healthcare during an online event last week hosted by her institute, framing it as one of the tasks that fall under the umbrella of ambient intelligence, the idea of computers and A.I. humming in the background of people’s lives. It’s a big deal because while A.I. has made advances in recent years at basic tasks like playing songs when asked, the technology is still in its early stages of development for more complicated tasks.
Companies like Amazon are very interested in ambient intelligence. Last week, I interviewed the online retail giant’s senior vice president of devices, David Limp, during Fortune’s Brainstorm Tech conference in Half Moon Bay, Calif., about his company’s bet that the technology, which it calls ambient computing, will become more mainstream.
For example, Limp said Amazon’s Alexa assistant, embedded in smart speakers like the Echo, learned to turn off the lights inside his home at 10:00 p.m. by detecting patterns in his family’s lifestyle. “You just go up to bed and the lights all turn off and you don’t have to think about anything—just because Alexa had this hunch, it’s magical,” he said.
But while Amazon is focusing on Alexa handling more basic tasks in the home, Li’s team at Stanford is hoping that future A.I. will serve as extra eyes for medical staff. Doctors would be able to better monitor a person’s sleep using an array of sensors that are more advanced than Fitbits or Apple Watches, for instance, and more reliable than a patient’s memory.
Li cautioned, however, that “we know how great this technology can be, but we also cannot be naive.” “Any technology is a double-edged sword as a tool,” she said, touching on some of the ethical risks of ambient intelligence, such as data privacy. “It can bring unintended consequences.”
She said her work on healthcare and A.I. involves collaborating with ethicists, legal scholars, and other experts who are concerned about the societal dangers of ever-present computing constantly gathering and analyzing people’s behavior in the physical world, around the clock. “We don’t pretend, as technologists, that we know all the answers,” Li said.
A.I. IN THE NEWS
Intel keeps spinning. Intel plans to spin off its autonomous driving subsidiary Mobileye and recoup the $15.3 billion it spent acquiring the Israeli company in 2017, Fortune’s Christiaan Hetzner reported. When Mobileye goes public, likely next year, Intel will take the majority of the IPO’s proceeds and will unload its mobility services brand Moovit AV to Mobileye, the article noted.
A quantum application. Quantinuum, a new tech company formed by the merger of Honeywell’s quantum computing unit with a U.K. business, debuted encryption tools that rely on quantum computing to generate random numbers that the company claims are more secure than those produced by conventional encryption tools. My Eye on A.I. colleague Jeremy Kahn explained that using quantum computing to randomly generate numbers “represents one of the first useful commercial applications of today’s quantum computers, which so far are too underpowered and error-prone to accomplish many of the far-out feats that technologists predict they will one day accomplish, including cracking existing encryption systems.”
A.I.-designed weapons of the future. Chinese naval researchers said they have used A.I. to develop a prototype electromagnetic gun that’s more powerful than conventional weapons, according to a report by The South China Morning Post. Electromagnetic weapons are hard to design because “tiny differences in the size and shape of the coils can make a dramatic difference to performance,” the article stated. A.I., with its ability to learn from previous design mistakes, helped the researchers create their weapon.
What’s up at this A.I. casino? Technical difficulties have plagued the opening of Genting’s Resorts World Sentosa, a casino that’s billed as the world’s first “A.I. casino,” according to a report by The Financial Times. The high-profile Chinese A.I. company SenseTime was supplying much of the A.I. software to power the casino, which the report said is filled with “robot croupiers and cameras that can spot bad behaviour.” Employees who spoke anonymously to the publication said that, among other problems, “cameras existing in the casino did not offer high-quality images, particularly in poorly lit areas such as the car parks.”
EYE ON A.I. TALENT
New Twitter CEO Parag Agrawal has restructured the social media company’s leadership team, appointing Kayvon Beykpour, Bruce Falck, and Nick Caldwell as general managers who will “lead all core teams across engineering, product management, design, and research,” the company said in a regulatory filing. As a result of the shuffling, Twitter’s engineering lead Michael Montano and design and research lead Dantley Davis will leave the company.
7SIGNAL hired Ted Schneider to be the networking technology company’s chief technology officer. Schneider was previously the CTO of IT firm Arcos.
EYE ON A.I. RESEARCH
A.I. as math tutor. Researchers from DeepMind, an A.I. research subsidiary of Google parent Alphabet, published a paper in Nature about using deep learning as a tool to help mathematicians make fundamental discoveries in their field. In one noteworthy discovery, deep learning helped mathematicians develop a formula related to knot theory, the study of 3D curves that cannot be untangled. According to an article about the research in New Scientist, the mathematicians originally dismissed the deep learning system’s suggestions because they were “so unintuitive.”
From the Nature paper:
Our case studies demonstrate how a foundational connection in a well-studied and mathematically interesting area can go unnoticed, and how the framework allows mathematicians to better understand the behaviour of objects that are too large for them to otherwise observe patterns in. There are limitations to where this framework will be useful—it requires the ability to generate large datasets of the representations of objects and for the patterns to be detectable in examples that are calculable.
FORTUNE ON A.I.
Intel aims to vault ahead of competition during chip shortage—By Dan Catchpole
Why drones won’t deliver your holiday gifts this year—By Jessica Mathews
Executives love talking about digital transformation, but here’s what many don’t mention—By Verne Kopytoff
Augmented reality goes to work on factory floors and in brain surgery—By Jenna Schnuer
Federal Trade Commission sues Nvidia to block its $40 billion Arm acquisition—By Jonathan Vanian
Waymo’s co-CEO on the next stop for driverless cars: curbside grocery delivery—By Dan Catchpole
A new A.I. research group emerges. High-profile A.I. researcher Timnit Gebru has formed the Distributed AI Research Institute (DAIR), which will produce research about A.I. and act “to counter Big Tech’s pervasive influence on the research, development and deployment of AI,” according to an announcement. Gebru formerly co-led Google’s ethical A.I. team; the search giant ousted her last year while she and colleagues were in the process of publishing an academic paper that explored energy concerns and bias problems in the large language models created by companies like Google and Facebook. Gebru discussed the A.I. research landscape and what she hopes to accomplish with her institute in an opinion piece published in The Guardian.
From her piece:
I see this monopoly outside big tech as well. I recently launched an AI research institute that hopes to operate under incentives that are different from those of big tech companies and the elite academic institutions that feed them. During this endeavor, I noticed that the same big tech leaders who push out people like me are also the leaders who control big philanthropy and the government’s agenda for the future of AI research. If I speak up and antagonize a potential funder, it is not only my job on the line, but the jobs of others at the institute. And although there are some – albeit inadequate – laws that attempt to protect worker organizing, there is no such thing in the fundraising world.