Avoiding A.I.’s bad reputation

October 26, 2021, 7:57 PM UTC

Autodesk CEO Andrew Anagnost is quick to differentiate the artificial intelligence developed by his company, which specializes in software for architects and engineers, from the A.I. used by online ad companies like Facebook and Google.

In his opinion, the machine-learning algorithms used to serve online ads and attention-grabbing content are giving technology in general, and A.I. in particular, a black eye.

“Unfortunately, the advertising model is, I think, it’s a terrible perversion of tech,” Anagnost says. “And it’s done nothing but give tech a bad name.”

As lawmakers scrutinize companies like Facebook, more attention is falling on the role their A.I. systems play in spreading sensational and misleading content. Some researchers suggest this algorithmic amplification is partly to blame for today's polarized society, which is ripe for exploitation by bad actors.

“And that’s why people get itchy now,” Anagnost says about the creepy feelings many people have about A.I. “Because it’s like, are you gonna Facebook me? Are you gonna Google me, you know—what’s going on here?” 

When talking about A.I., Anagnost says it’s important for his company to point out that its products aren’t ad-focused.

“We’re introducing people to the positive applications of A.I. in their day-to-day environment, because we’re not monetizing that information in any way,” Anagnost says.

Autodesk recently debuted a machine-learning feature in its core AutoCAD software that recommends keystroke shortcuts or related techniques, based on a customer's past behavior, to help them use the program more efficiently. Although this feature may be more palatable than A.I.-powered online ad systems, it is nonetheless another way A.I. is being used to track user behavior, and some workers may find the monitoring creepy, raising potential privacy concerns.

Anagnost counters that notion, however, saying his company's software doesn't send any of the data it collects to managers, who could otherwise use the information to fire low-performing employees.

“We make it very clear that we’re doing this to improve your usage of the product,” Anagnost says. “And if we really deliver that, [workers] forget very quickly that we’re monitoring how they use the product, and so far, the biggest feedback we’re getting from customers is, ‘Well, this is pretty cool.’”

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

Self-driving cars go racing. A team from the Technical University of Munich won a $1 million prize in a self-driving car race called the Indy Autonomous Challenge by recording the fastest two-lap average speed of 135.9 mph. The race was similar to the Indianapolis 500 qualifying races, in which cars zoom solo around the track instead of with others, noted the Indianapolis Star.

Insurance takes to robots and drones. Insurance companies like Travelers, United Services Automobile Association, and Farmers are spending more money on drones and robots to inspect properties damaged by storms, The Wall Street Journal reported. Farmers Insurance is using one of the four-legged robots sold by Boston Dynamics “to access unoccupied, structurally compromised houses and buildings to assess damage,” the report said.

Robot cooks rise again. Some restaurants are testing robots that can make burgers and fry chicken wings to deal with the labor shortage, according to a CNBC report. One restaurant, Buffalo Wild Wings by Inspire Brands, is testing a robot named Flippy sold by Miso Robotics. Still, an analyst from the labor research group EMSI “said that as a whole the industry is not yet able to bring robotics in at a meaningful level,” implying that robot cooks won’t go mainstream any time soon.

Twitter’s algorithmic megaphone. Twitter released internal research showing how its algorithms “amplify tweets from right-wing politicians and content from right-leaning news outlets more than people and content from the political left,” tech publication Protocol reported. Twitter said it does not know why certain tweets are amplified more than others, but future research will explore the question.

EYE ON A.I. TALENT

Security startup Lacework hired John Morrow to be its senior director of engineering. Morrow previously spent over 11 years at Facebook, where he was most recently the director of security engineering. He joins former Facebook vice president of engineering Jay Parikh, who became Lacework co-CEO in July, and former Facebook director of web foundation and site reliability Chip Turner, who became Lacework senior director of engineering in September.

Healthcare startup Thirty Madison picked Matthew Mengerink to be its chief technology officer. Mengerink was previously the vice president of Uber’s core infrastructure engineering group.

EYE ON A.I. RESEARCH

Turning 2D into 3D. Researchers from the University of Erlangen-Nuremberg in Germany published a non-peer-reviewed paper on using neural networks to create 3D virtual scenes from 2D photographs and graphics. You can watch a video demonstration of the technology described in the paper via this New Scientist article. The video shows how a photo of a train car and a children’s playground can be transformed into short 3D scenes that can be rotated to view the objects from different angles.

From the New Scientist article:

The neural network, developed by Darius Rückert and colleagues at the University of Erlangen-Nuremberg in Germany, is different to previous systems because it is able to extract physical properties from still images.

“We can change the camera pose and therefore get a new view of the object,” he says.

The system could technically create an explorable 3D world from just two images, but it wouldn’t be very accurate. “The more images you have, the better the quality,” says Rückert. “The model cannot create stuff it hasn’t seen.”

FORTUNE ON A.I.

The global chip shortage is driving demand for this London startup’s software—By Jeremy Kahn

Researchers: A.I. can play a role in eliminating bias in lending, but only with human help—By Jonathan Vanian

Automation can go hand in hand with job growth, says this CFO—By Sheryl Estrada

Ready or not, here comes Xpeng’s flying car—with wheels, wings, and a parachute—By Eamon Barrett

Seagate CEO on COVID-19, crypto, and the global computer chip shortage—By Jonathan Vanian

BRAIN FOOD

A.I.'s video game reviews. Much has been written about progress in natural language processing, a subset of A.I. in which computers understand and generate human-like text, often in response to specific prompts. High-profile language models that have recently attracted attention include Google’s BERT technology and the GPT-2 and GPT-3 language systems created by A.I. research lab OpenAI.

Now, video game publication Kotaku decided to test how good A.I. is at generating video game reviews. By feeding several video game reviews to the free GPT-2 language model, the author had the software generate its own made-up text to essentially finish the reviews. The author concluded that while the A.I. produced “readable content” in some cases, some of the text showed the A.I. using a “fake it till you make it approach,” in which the sentences seemed plausible but contained numerous inaccuracies.

From the article:

But it’s interesting to see just how far that AI-generated content has come. I imagine we’re not far off the day where some outlets or news wires start dabbling with GPT-3 generation for press release material, simply because the sheer volume of content online outstrips the number of people available to write it (but not the potential readership).

When it comes to reviews at least, you can’t beat the human touch. People know best what elements matter to other people. AI will get there one day, but that day isn’t today.
