Despite all those rapidly spreading fears about artificial intelligence making people redundant and potentially extinct, the technology’s developers remain deeply reliant on human labor—and are apparently not very good at getting the best out of their hidden workers.
That’s what I’ve taken away from a couple of interesting articles published in the last day. The first piece is a collaboration between New York Magazine and The Verge, in which writer Josh Dzieza looks into the growing ranks of A.I. annotators—people who have the tedious, poorly paid, and sometimes baffling task of sorting and labeling imagery from photos and videos, so A.I. knows what’s what.
Dzieza himself signed up to annotate stuff for Scale AI, which sells data to OpenAI among others. He found himself having to grapple with 43 pages of very specific directives—“DO label leggings but do NOT label tights…label costumes but do NOT label armor”—that show how “the act of simplifying reality for a machine results in a great deal of complexity for the human.” Those performing the labor in Kenya are getting paid as little as a dollar an hour, which isn’t exactly likely to elicit the sort of dedication needed to correctly recall and apply such complex instructions.
As my colleague Jeremy Kahn noted in yesterday’s Eye on A.I. newsletter, many enterprising contractors doing this sort of labeling through Amazon’s Mechanical Turk platform have started using A.I. to do their work for them. It’s an understandable hack of what sounds like an incredibly soulless job, but it’s likely to end up worsening the quality of the resulting data.
Meanwhile, The Register published an interview with a former employee of a data outfit called Appen, who says he was illegally fired for pushing back over working conditions. Ed Stackhouse, who wrote to Congress about his concerns before his firing, claims contractors hired to assess the accuracy of Google Bard responses have to do so at excessive speed.
“You can be given just two minutes for something that would actually take 15 minutes to verify,” Stackhouse told the British tech outlet, adding that this hasty feedback could lead Bard to give people bad advice about prescriptions, or to misrepresent historical facts: “The biggest danger is that they can mislead and sound so good that people will be convinced that A.I. is correct.” Google told The Register that Appen was responsible for its working conditions, but did not address the harm concerns. Fortune asked Appen for comment but had received none at the time of publication.
It’s not exactly news that the tech industry can be exploitative and prone to cutting corners, but even if one brushes past the moral implications of such practices, there are unwelcome implications for the end products themselves and the people who use them. Unless the A.I. sector is willing and able to clean up its act, it’s asking for trouble.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.
NEWSWORTHY
Amazon faces FTC lawsuit. The Federal Trade Commission sued Amazon today over accusations that the e-commerce giant enrolled consumers into its Prime service without their consent and made it difficult to cancel subscriptions. FTC Chair Lina M. Khan said “these manipulative tactics harm consumers and law-abiding businesses alike” and said the agency will work to protect consumers from deceptive practices in digital markets. Amazon Prime costs $139 a year and comes with free two-day shipping and access to platforms like Prime Video and music streaming. But for those who want to part ways with the service, Amazon put hurdles in the cancellation process, according to the FTC, and internally referred to it as “Iliad,” a nod to Homer’s poem about the decade-long Trojan War.
Slack looks to rehire former staff. Less than six months after Salesforce laid off 10% of its staff, its messaging subsidiary Slack will be hiring a “significant number of new roles” in Q3 on its product development engineering team. As Fortune’s Kylie Robison reports, these roles will be focused on generative A.I. and other Slack features. The hiring plans come as Salesforce grows its A.I. efforts with the development of multiple “GPTs” and the recent introduction of Einstein GPT, a generative A.I. tool for boosting the efficiency of sales, marketing, and customer service agents.
Gannett sues Google. The largest newspaper chain in the U.S. accused internet giant Google of violating federal antitrust laws by abusing a monopoly over the technology used by publishers to buy and sell online ads. Gannett argued that Google’s dominance over the digital ad market slashed potential revenue, with news publishers seeing an almost 70% decrease in advertising revenue since 2009, causing many newspapers to cease operations. Vice president of Google Ads Dan Taylor called Gannett’s claims “simply wrong,” saying that publishers have other options for advertising technology and that when using Google tools “they keep the vast majority of the revenue.” The New York Times reports that this case follows other complaints brought against Google over its ad practices, including an antitrust lawsuit filed by the Justice Department in January, a similar case filed by the European Commission last week, and an investigation into Google’s advertising practices by Britain’s antitrust authority.
ON OUR FEED
“You just keep uploading your images and you get your residuals every month and life goes on—then all of a sudden, you find out that they trained their A.I. on your images and on everybody’s images that they don’t own. And they’re calling it ‘ethical’ A.I.”
—Eric Urquhart, a Connecticut-based artist who joined Adobe Stock in 2012 and has several thousand images on the platform. Urquhart is one of many contributors to Adobe Stock, which was used to train the image generation platform Adobe Firefly without notification or consent. Legal experts say artists likely gave Adobe a license in perpetuity covering whatever media come to be invented, but Urquhart told VentureBeat that no one was thinking about A.I. years ago.
IN CASE YOU MISSED IT
Mark Zuckerberg has got $39 billion richer during the A.I. boom. He’s not alone—the world’s über-wealthy have made a killing, by Eleanor Pringle
The Trillion Dollar Club–Plus is up 53% this year, but investors could see a painful, costly fall, by Shawn Tully
Cohere CEO calls A.I. debates on human extinction ‘absurd use of our time and the public’s mind space’, by Steve Mollman
Ford CEO Jim Farley downplays Elon Musk’s new Cybertruck: ‘I make trucks for real people who do real work’, by Eleanor Pringle
BEFORE YOU GO
How to be in a Streamberry original. Black Mirror’s sixth season dropped last week with episodes featuring a corrupt streaming service known as “Streamberry” that looked a lot like Netflix. Viewers quickly took to social media to discuss the episodes and joke about thoroughly reading Netflix’s terms of service after the fictional Streamberry used a quantum computer to make a show detailing the private life of a woman named Joan in the season’s opening episode, “Joan Is Awful.”
Netflix also got in on the joke by launching Streamberry.tv and youareawful.com. While Streamberry.tv is a promotional site directing people to Netflix episodes, the latter website has visitors act much like Joan by signing up and agreeing to terms of service that involve having their digital likeness used. After agreeing, fans submit their name and a profile photo to see a poster of themselves on the Streamberry show. The website mentions that Netflix can use their image for its marketing campaign and that the photo could end up on a billboard.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.