This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
Sony wants to be a leader in “ethical” artificial intelligence amid widespread cynicism about corporate claims to use the technology for societal good.
In recent years, big tech companies like Google and Facebook have promoted ethical A.I. research, in which in-house teams are given the freedom to publish papers that reveal flaws in the A.I. software of their employers and other organizations. One goal is to use the research to create better products. A paper highlighting the problems a voice-translation service has in understanding Singaporean-accented English, for instance, could spur a company to create software that works well for everyone and not just people who speak American English.
But in practice, companies have trouble promoting ethical A.I. research because it can conflict with their core business. Critics have slammed Facebook, for instance, for steering its A.I. ethics team away from projects intended to curb the spread of misinformation because such efforts may halt user growth and engagement, as MIT Technology Review previously reported.
Most recently, critics have pummeled Google for ousting high-profile A.I. researcher Timnit Gebru after she co-wrote a critical paper. The paper detailed racial bias in, and the enormous energy consumption of, large language models, which generate text in response to what people write. The episode reflected poorly on Google because it suggested that the search giant did not take Gebru’s work seriously and wanted to avoid criticism.
Last year, Sony began implementing “A.I. ethics assessments” to investigate how certain A.I.-powered products could pose societal harm, said Alice Xiang, a Sony AI senior research scientist. Sony’s camera division, for instance, has been developing sensors to power computer-vision tasks, like recognizing cars in videos or photos. Part of Xiang’s work is to help Sony study how to mitigate potential racial bias problems created by such systems, which have been shown to be better at recognizing white men than at recognizing women and people of color. By working with Sony’s business units “who are struggling with these issues,” Xiang hopes the company can prevent A.I. ethical disasters.
Like its counterparts at other tech companies, Xiang’s team plans to publish papers about what it finds, but Sony is still debating how much it wants to share about its internal work. In theory, the research could lead to Sony abandoning certain products, but Xiang doesn’t want to be put in a position where she has to publicly “point a finger at someone and be like, ‘Yeah, this product was unethical.’”
It’s this kind of struggle—companies wanting to publicize their A.I. ethics research without revealing details about their internal business decisions—that complicates matters. A lack of transparency is one reason there is skepticism about corporate A.I. ethics. It’s easy for companies to make a vague statement like “A.I. can be used for good,” but it’s difficult for them to say anything more substantial.
But Xiang is optimistic that Sony’s A.I. ethics research will have an impact rather than be window dressing. Sony isn’t interested in A.I. ethics merely to quell “PR blowouts” and instead aims to “integrate ethics by design,” she said, meaning that the company will review products thoroughly before debuting them.
“I think Sony is in a unique position where we haven’t had all of this negative PR,” Xiang said, obliquely contrasting her company with others that have suffered A.I. meltdowns like chatbots learning to parrot offensive phrases from Internet trolls. “We’re doing this at an early stage, voluntarily, because we see that if we want to really be competitive globally and sustainably in the long term, then thinking about this from the get-go is really important.”
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com
A.I. IN THE NEWS
Europe's tough stance on faces. Privacy regulators in Europe are proposing to ban the use of facial recognition technology in public spaces, Fortune’s David Meyer reported. The proposed facial recognition ban by the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) goes even further than previously proposed European A.I. regulations, which would have let law enforcement use the technology in certain cases. “A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for A.I.,” said EDPB chair Andrea Jelinek and European Data Protection Supervisor Wojciech Wiewiórowski.
The federal government's facial recognition push. The U.S. Government Accountability Office published a report about the use of facial recognition technologies by federal agencies. The report found that at least 20 agencies are using the software, three agencies used the tech to scan photos of people involved with the U.S. Capitol attack, and six agencies used the software “to help identify people suspected of violating the law during the civil unrest, riots, or protests following the death of George Floyd in May 2020.” The GAO recommends that federal agencies figure out how to better manage their use of facial recognition software and “assess the risks of using such systems, including privacy and accuracy-related risks.”
Amazon’s algorithmic tentacles. Contract delivery drivers for Amazon spoke to Bloomberg News about the online retail giant’s use of algorithms that can track workers and fire them if the software deduces they did a poor job. The article said that “Chief Executive Officer Jeff Bezos believes machines make decisions more quickly and accurately than people, reducing costs and giving Amazon a competitive advantage.” An Amazon spokesperson said the contract delivery drivers’ complaints were “anecdotal” and don’t reflect the “vast majority” of its drivers.
An A.I.-powered Library of Congress. The Library of Congress is exploring how neural networks—software designed to loosely mimic how the human brain learns—can improve “search results that might be relevant to the user even if they don’t match the exact search term,” The Wall Street Journal reported. Because neural networks can quickly find patterns in text data, a person who searches for the word “liberty” could receive a list of results that contain associated words like “liberation” or “freedom,” the report said. A Library of Congress spokesperson told the Journal that the institution spent about $200 million on IT in 2020, and uses the cloud computing services of Amazon, Google, and Microsoft.
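The “liberty” finds “freedom” behavior typically comes from representing words as numeric vectors (embeddings) and ranking other words by how close their vectors are. A minimal sketch of the idea, using invented three-dimensional toy vectors rather than real learned embeddings (production systems learn vectors with hundreds of dimensions from large text corpora):

```python
from math import sqrt

# Toy word vectors — hypothetical values chosen so that related words
# point in similar directions. Real systems learn these from text.
vectors = {
    "liberty":    [0.90, 0.80, 0.10],
    "freedom":    [0.85, 0.75, 0.15],
    "liberation": [0.80, 0.90, 0.20],
    "camera":     [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def related_terms(query, k=2):
    """Rank every other word by similarity to the query word."""
    q = vectors[query]
    scored = sorted(
        ((cosine_similarity(q, v), w) for w, v in vectors.items() if w != query),
        reverse=True,
    )
    return [word for _, word in scored[:k]]

print(related_terms("liberty"))  # → ['freedom', 'liberation']
```

With these toy vectors, “liberty” scores “freedom” and “liberation” far above “camera,” which is the effect the Library is after: results ranked by meaning rather than exact keyword match.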
EYE ON A.I. TALENT
Amazon hired Ken Washington as a vice president of software engineering, according to the Detroit Free Press. Washington was previously chief technology officer of Ford, which he joined in 2017 from Lockheed Martin Space Systems.
Outset Medical added Jean-Olivier Racine as the healthcare technology company’s CTO. Racine spent nearly a decade working at Amazon, most recently as the head of engineering and science of AWS Health AI.
Trucking and logistics startup Convoy hired Dorothy Li as CTO, tech publication GeekWire reported. Li was a longtime Amazon employee who was most recently an AWS vice president of business intelligence and analytics services.
EYE ON A.I. RESEARCH
A.I. for better air. University of Houston researchers published a paper in the Nature journal Scientific Reports about using neural networks to accurately predict ozone levels up to two weeks in advance, an improvement over current forecasting methods, which predict ozone levels only three days ahead. The findings, based on a variety of meteorological data, could help scientists better predict air quality nationwide. Although the study predicted ozone concentrations, the researchers said the approach “can be extended to various other pollutants.”
From the paper: The current systems for air quality prediction are either a short-term forecasting system or a low-accuracy system that covers a more extended forecasting period. Since this model provides a reasonable forecast two weeks in advance, it can provide an actionable window within which government agencies can deploy effective measures for reducing the occurrence of extreme ozone episodes.
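The core setup the paper describes is supervised forecasting: pair the meteorological conditions observed on one day with the pollutant level measured two weeks later, then fit a model to that pairing. A toy sketch of that setup, using a plain linear model trained by gradient descent as a stand-in for the paper's neural network, with invented feature names and synthetic data constructed so that today's conditions are predictive of the later reading:

```python
import random

random.seed(0)

# Hypothetical features for the sketch: temperature, wind speed, solar
# radiation (standardized). The "true" weights generate synthetic data.
true_w = [2.0, -1.0, 0.5]
n_days = 300

data = []
for _ in range(n_days):
    x = [random.gauss(0, 1) for _ in range(3)]
    # Synthetic target: the ozone level 14 days ahead, constructed
    # (purely for illustration) to depend on today's conditions.
    y = sum(wi * xi for wi, xi in zip(true_w, x)) + random.gauss(0, 0.1)
    data.append((x, y))

# Fit a linear model with stochastic gradient descent. What the sketch
# illustrates is the (features today -> target two weeks out) pairing,
# not the paper's actual architecture.
w = [0.0, 0.0, 0.0]
lr = 0.05
for _ in range(200):  # training epochs
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
```

After training, `w` recovers weights close to the generating ones, so the fitted model can map a fresh day's readings to a forecast 14 days out; the researchers' contribution is making that long horizon accurate with real meteorological data and a neural network in place of this linear stand-in.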
FORTUNE ON A.I.
Amazon Web Services and Salesforce are deepening their ties to fight Microsoft and Google—By Jonathan Vanian
Sir Richard Branson enters the billionaire space race—By Nicole Goodkind
Google and Microsoft’s venture capital arms are pouring $120 million into data-cruncher Incorta—By Jonathan Vanian
U.S. chipmaker invests $4 billion in Singapore, even as Congress tries to lure manufacturers home—By Eamon Barrett
BRAIN FOOD
The day Pepper died. It wasn’t too long ago that Japanese conglomerate SoftBank was heavily promoting its Pepper humanoid robot as breakthrough technology. SoftBank pitched Pepper—which resembled a pale, four-foot-tall version of the video game character Mega Man—as a companion to the elderly, a capable bank assistant, a greeter at hospitals, and “a teacher of schoolchildren.”
But alas, SoftBank has stopped making the Pepper robot as it downgrades its ambitions to build “human-like machines that could serve customers and babysit kids,” Reuters reported. The news should not come as a surprise to anyone who has actually interacted with Pepper.
Over the years, I’ve “talked” with Pepper on multiple occasions at technology trade shows and while staying at a Las Vegas hotel, where the robot functioned as a greeter. Pepper was adorable, for sure, but its capabilities were lacking. I remember one frustrated trade show booth attendant forcing smile after smile to get Pepper to respond to her. After several smiles failed to “wake” it, the attendant gave up, leaving Pepper looking even more lifeless than it already was.
Indeed, Reuters reported that Pepper’s “sales were impacted by its limited functionality and unreliability.”