Hello and welcome to Eye on AI.
As the conversation around generative AI safety continues, a recent report from the UN is applying a specific lens to the risks. Published as a supplement to the UN B-Tech Project’s recent paper on generative AI, the “Taxonomy of Human Rights Risks Connected to Generative AI” explores 10 human rights that generative AI may adversely impact.
The paper says that “the most significant harms to people related to generative AI are in fact impacts on internationally agreed human rights” and lays out several examples for each of the 10 human rights it explores: Freedom from Physical and Psychological Harm; Right to Equality Before the Law and to Protection against Discrimination; Right to Privacy; Right to Own Property; Freedom of Thought, Religion, Conscience, and Opinion; Freedom of Expression and Access to Information; Right to Take Part in Public Affairs; Right to Work and to Gain a Living; Rights of the Child; and Rights to Culture, Art, and Science.
In many cases, the report adds nuance to issues people are already discussing, such as generative AI’s impact on creative professions and its use in creating harmful content, from political disinformation to nonconsensual pornography and CSAM (child sexual abuse material). Taken together, the more than 50 examples of potential human rights violations paint a striking picture of what’s at stake as companies rush to develop, deploy, and commercialize AI.
The report also asserts that generative AI is both altering the scope of existing human rights risks associated with digital technologies (including earlier forms of AI) and exhibiting unique characteristics that give rise to new types of human rights risks. Examples include the use of generative AI in armed conflict and the potential for multiple generative AI models to be fused together into larger systems that could autonomously disseminate huge quantities of disinformation.
“Other potential risks are still emerging and in the future may represent some of the most serious threats to human rights linked to generative AI,” it reads.
One risk that stuck out to me concerns the Rights of the Child: “Generative AI models may affect or limit children’s cognitive or behavioral development where there is over-reliance on these models’ outputs, for example when children use these tools as a substitute for learning in educational settings. These use cases may also cause children to unknowingly adopt incorrect or biased understandings of historical events, societal trends, etc.”
The report also notes that children are especially susceptible to human rights harms linked to generative AI because they are less capable of distinguishing synthetic content from genuine content, identifying inaccurate information, and understanding that they’re interacting with a machine. It makes me think of how young children were given daily access to social media with virtually no transparency or research into how it might affect their development or mental well-being. As a result of social media companies’ recklessness and an almost total lack of guardrails around the technology, children were harmed. The issue came to a head earlier this year when the CEOs of Meta, Snapchat, TikTok, X, and Discord testified before Congress in a heated hearing on social media’s role in child exploitation, as well as its contribution to addiction, suicide, eating disorders, unrealistic beauty standards, bullying, and sexual abuse. Kids were treated as guinea pigs on Big Tech’s social media platforms, as critics and parents often say, and it would be shameful to repeat the mistake with generative AI.
The section on the Right to Work and to Gain a Living is also interesting and increasingly relevant, exploring how generative AI could drastically alter economies, labor markets, and daily work practices, and the disparate effects this could have on different groups. The examples range from employers using generative AI to monitor workers, to the exploitative nature of the data-labeling work required to create large language models, to implications for workers’ rights, such as the heightened risk that workers engaged in labor disputes with employers will be replaced with generative AI tools.
One thing that’s clear from the report, however, is that these potential human rights violations are not inevitable; they depend on how we implement the technology and what guardrails, if any, we put around it. Generative AI won’t commit these 50-plus human rights violations on its own. Powerful humans acting recklessly to prioritize profit and dominance will.
Now, here’s some more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
Justice Department, FTC agreement clears the way for antitrust investigations of Nvidia, Microsoft, and OpenAI. The two government bodies reached an agreement on how to split antitrust probes of some of the key companies involved in the generative AI boom, clearing the way for these investigations to proceed. The Justice Department will lead the inquiry into Nvidia's potential antitrust violations, while the FTC will focus on the activities of OpenAI and Microsoft. This agreement highlights the momentum building in the Biden Administration’s efforts to address the size and market dominance of major tech companies, with ongoing cases against Google, Apple, Meta, and Amazon. But the probes related to AI also show the government’s increasing willingness to take action to preempt emerging antitrust issues, rather than waiting for more extensive evidence of market harms. Jonathan Kanter, the head of the Justice Department’s antitrust division, said AI’s reliance on vast amounts of computing power and data “can give already dominant firms a substantial advantage,” the New York Times reported.
Nvidia hits a $3 trillion valuation, surpassing Apple as the second most valuable company. Talk about a chart that keeps going up and to the right. Nvidia’s soaring market cap reached a new high Wednesday night, hitting a value of $3.019 trillion. Of course, it’s all because of AI—the company’s A100 GPUs are the backbone of the generative AI boom and the company has an estimated 80% market share in AI chips for data centers, according to CNBC. Microsoft, which has also benefited from the demand for AI, remains the most valuable company in both the U.S. and the world with a market cap of $3.15 trillion as of Wednesday.
Asana announces “AI teammates” to take on project management tasks. That’s according to The Verge. The company says its model will use stored information about past projects to assign work based on who’s best suited for the task. Asana additionally announced a chatbot interface for the model that will allow users to ask questions about the project. AI assistants for the workplace are a hot area for generative AI with all the usual suspects of enterprise software—from Microsoft to Google and Salesforce—having released similar tools.
Wix unveils an AI-powered tool to let users build apps using simple text prompts. That’s according to TechCrunch. The capability is set to arrive this week and offers a chatbot-like interface that lets users describe the purpose and aesthetic of their desired iOS or Android app in natural language. Using that information, Wix will automatically generate the app, which users can then further customize. Wix has been a leader in no-code design, which lets people without coding knowledge create websites and apps by dragging and dropping design elements, with easy integrations for functions like payments. The ability to create apps just by typing a sentence lowers the barrier to entry even further.
Humane warns AI Pin users of fire safety risk. Users of the $700 AI-in-a-box should “immediately” stop using the charging case that came with the device due to an issue with the battery cell that “may pose a fire safety risk,” the company emailed users, according to The Verge. The device and charging case have been in users’ hands since mid-April, and many have reported issues with overheating. Humane says it’s looking for a new vendor for the affected part and will give users two free months of the subscription required to use the gadget. It’s another step backward for the AI gadget maker, which has reportedly been looking for a buyer for the company following the product’s negative reception.
Wired writer says Google’s AI Overviews copied his original work. “Google’s AI feature bumped my article down on the results page, but the new AI Overview at the top still referenced it,” wrote Reece Rogers in Wired, showing side-by-side screenshots of the text in an article he published and the extremely similar text provided in an AI Overview summary. A Google spokesperson acknowledged that the AI-generated summaries may use portions of writing directly from web pages but defended AI Overviews for how they link back to the original sources. In the case at hand, however, the paragraph with the lifted language was not directly attributed to Rogers, and the article was one of six footnotes hyperlinked near the bottom of the result, he said. Google is under growing scrutiny for how AI Overviews uses publishers’ work while simultaneously uprooting the search experience in a way likely to prevent users from ever interacting with those publishers, threatening their businesses.
FORTUNE ON AI
Elon Musk admits diverting Tesla’s AI chips to his other companies, claiming ‘they would have just sat in a warehouse’ —Christiaan Hetzner
Unbabel says its new AI model has dethroned OpenAI’s GPT-4 as the tech industry’s best language translator —Jeremy Kahn
With $30 billion in lost market value and big shoes to fill, Snowflake’s new CEO bets big on AI—and on big friends like Nvidia’s Jensen Huang —Sharon Goldman
At this gym, customers can choose an AI best friend or drill sergeant —Alyssa Newcomb
Microsoft’s chief scientist: Step aside, prompt engineers—AI will start prompting you instead —Jaime Teevan (Commentary)
AI CALENDAR
June 10: Apple WWDC keynote
June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
July 15-17: Fortune Brainstorm Tech in Park City, Utah (register here)
July 30-31: Fortune Brainstorm AI Singapore (register here)
Aug. 12-14: Ai4 2024 in Las Vegas
EYE ON AI RESEARCH
No language left behind. That’s the name of a new translation model out of Meta, which the company says covers 200 languages and performs 44% better than prior systems. As described in a paper published yesterday in Nature, the model, also called NLLB-200, was built to transfer translation capabilities learned from high-resource languages, where training data is abundant, to low-resource languages, where it is scarce. According to Meta, the multilingual model contains three times as many low-resource languages as high-resource languages.
Meta is framing the model as a break from the pattern of translation models thus far, which have historically focused on just a handful of languages. That focus has of course come at the expense of other languages and the populations who speak them, and it threatens to exacerbate digital inequalities in the long run. The model is now freely available for non-commercial use on GitHub.
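For readers who want to kick the tires, distilled NLLB-200 checkpoints are also hosted on Hugging Face. Below is a minimal sketch of what a translation call looks like in Python, assuming the transformers library and the facebook/nllb-200-distilled-600M checkpoint; the language tags are FLORES-200 codes, and the example pairs English with Yoruba, one of the low-resource languages the model covers.

```python
# Minimal sketch: translating one sentence with a distilled NLLB-200 checkpoint
# from Hugging Face. Assumes `pip install transformers torch` and that the
# facebook/nllb-200-distilled-600M weights download on first run.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "facebook/nllb-200-distilled-600M"

# Source language is declared on the tokenizer using a FLORES-200 code.
tokenizer = AutoTokenizer.from_pretrained(MODEL, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

inputs = tokenizer("No language left behind.", return_tensors="pt")

# The target language is selected by forcing the decoder's first token
# to be the target-language tag (here: Yoruba, "yor_Latn").
output_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("yor_Latn"),
    max_length=64,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Swapping in any other pair of FLORES-200 codes changes the translation direction; the same interface covers all 200 languages in the release.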