
Who does A.I. think will win today’s election?

November 3, 2020, 6:05 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your inbox, sign up here.

Well, it’s Election Day in the U.S. Since that’s probably all anyone in the world wants to read about today, we here at Eye on A.I. won’t try to fight the trend.

What does A.I. have to say about who will win the election? There are a number of artificial intelligence systems that claim to have a good track record at predicting election outcomes.

Most of these systems look at data from social media and then use sentiment analysis—which categorizes the emotions expressed in a text—to figure out whether a particular post is in favor of or against a certain candidate. The systems then try to find a correlation between the quantity and quality of these expressions and voting patterns.
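To make the approach concrete, here is a minimal sketch of the classify-and-tally idea, using the open-source Hugging Face transformers library. The sample posts, candidate names, and scoring rule are illustrative assumptions, not any forecaster's actual pipeline.

```python
# A minimal sketch of sentiment-based election forecasting.
# Assumes the Hugging Face `transformers` library; the posts and
# candidate keywords are invented for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

posts = [
    "I love what Candidate A has done for the economy!",
    "Candidate A's record is a disaster.",
    "Candidate B gives me hope for the future.",
]
candidates = {"Candidate A": 0.0, "Candidate B": 0.0}

for post in posts:
    result = classifier(post)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    sign = 1 if result["label"] == "POSITIVE" else -1
    for name in candidates:
        if name.lower() in post.lower():
            candidates[name] += sign * result["score"]

# A real system would correlate these aggregate scores with historical
# vote shares rather than reading a winner off them directly.
print(candidates)
```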

Studies have shown that A.I. systems designed in this way can predict election outcomes, sometimes more accurately than polls. In 2016, several A.I. systems based on social media analysis accurately forecast Donald Trump’s victory over Hillary Clinton, even though most poll-based forecasts put Clinton in the White House.

The strength of these A.I.-based forecasting tools seems to hold, for the most part, around the world: A number of these systems correctly predicted that the U.K. would vote to leave the European Union in 2016, even though polls gave the “Remain” side a narrow edge. A 2016 study found that Facebook posts could be used to predict about 80% of the winners in Taiwan’s parliamentary elections. A 2018 study found that a sentiment analysis-based A.I. model could accurately forecast election results in India and Pakistan.

But such systems aren’t infallible. The same A.I. software that worked well for India and Pakistan, for instance, failed to accurately forecast an election in Malaysia.

So what are such systems saying about today’s vote? A system designed by a company called KCore Analytics forecasts that Biden will win the popular vote handily, but that his Electoral College margin will be razor-thin.

Similarly, the Italian A.I. company Expert.ai sees Biden in the lead, but only by a few percentage points—a much smaller margin than the seven-point lead Biden holds in an average of national polls.

But Polly Pollster, an A.I. system created by Advanced Symbolics that correctly forecast both the 2016 U.S. presidential election and the 2019 Canadian elections, predicts that Biden will win easily, with Trump having only about an 8% chance of pulling off an upset. That forecast is similar to the non-A.I. one put together by FiveThirtyEight, which combines various state-level polls and gives Biden a 90% chance of winning.

These systems work in different ways. The system that successfully predicted the Indian election results used deep learning in two stages: one neural network categorized sentiment, and its output fed a second neural network that correlated that sentiment with an election result.
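Here is a rough sketch of that two-stage idea in PyTorch. The first-stage sentiment classifier is stubbed out as precomputed features, and the architecture and numbers are invented for illustration rather than drawn from the study.

```python
# Sketch of a two-stage forecaster: per-candidate sentiment scores
# (from a first-stage classifier, stubbed out here as fixed features)
# feed a second network that maps aggregate sentiment to vote share.
import torch
import torch.nn as nn

class VoteShareNet(nn.Module):
    def __init__(self, n_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
            nn.Sigmoid(),  # vote share constrained to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# Toy features per region: [mean sentiment A, mean sentiment B,
# post volume A, post volume B], with made-up target vote shares.
X = torch.tensor([[0.6, -0.2, 0.8, 0.5], [-0.1, 0.4, 0.3, 0.9]])
y = torch.tensor([[0.55], [0.42]])

model = VoteShareNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```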

Expert.ai, on the other hand, uses an older form of A.I., based on encoding human expertise in a knowledge graph, to generate its sentiment analysis, according to Marco Varone, the company’s chief technology officer. The graph is able to identify named entities—such as people, companies and places—better than many neural network-based approaches can, Varone says, and it can better understand complex relationships between words than some large statistical language models.
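A toy example of the symbolic flavor of that approach: entities and their relationships are hand-encoded in a lookup structure rather than learned from data. The graph contents below are invented for illustration.

```python
# Toy illustration of knowledge-graph-style entity recognition:
# entities, types, and relations are hand-encoded, not learned.
KNOWLEDGE_GRAPH = {
    "Joe Biden": {"type": "Person", "member_of": "Democratic Party"},
    "Donald Trump": {"type": "Person", "member_of": "Republican Party"},
    "Pennsylvania": {"type": "Place", "part_of": "United States"},
}

def find_entities(text: str):
    """Simple substring lookup of known entities in the text."""
    return [(name, attrs["type"])
            for name, attrs in KNOWLEDGE_GRAPH.items()
            if name in text]

print(find_entities("Donald Trump held a rally in Pennsylvania."))
# [('Donald Trump', 'Person'), ('Pennsylvania', 'Place')]
```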

To guard against bots or other fake accounts potentially skewing its forecast, Expert.ai’s system is trained to weed out social media accounts that seem only to retweet content from other accounts, as bots often do, as well as to identify large numbers of accounts posting with extremely similar language, Varone says. The company also relies on human screeners to do some of this filtering.
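The two heuristics Varone describes might look something like this in practice; the thresholds and accounts below are assumptions for illustration.

```python
# Hedged sketch of two bot-filtering heuristics: (1) drop accounts
# that mostly retweet, and (2) flag clusters of accounts posting
# near-identical text. All data and cutoffs are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

accounts = {
    "user_a": {"retweets": 980, "posts": 1000, "text": "Vote for change on Tuesday!"},
    "user_b": {"retweets": 120, "posts": 1000, "text": "Long lines at my polling place today."},
    "user_c": {"retweets": 150, "posts": 1000, "text": "Vote for change on Tuesday!!"},
}

# Heuristic 1: accounts that almost exclusively retweet.
suspected = {u for u, a in accounts.items() if a["retweets"] / a["posts"] > 0.9}

# Heuristic 2: near-duplicate language across accounts.
names = list(accounts)
tfidf = TfidfVectorizer().fit_transform(accounts[n]["text"] for n in names)
sims = cosine_similarity(tfidf)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sims[i, j] > 0.8:
            suspected.update({names[i], names[j]})

print(suspected)  # accounts excluded before sentiment is aggregated
```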

Guarding against bots may be particularly important because, as KCore’s Hernán Makse told The Independent, fewer of Trump’s likely voters than Biden supporters are on Twitter. That means tweets from Trump supporters are weighted more heavily in social media-based election forecasting models, so there’s a risk that pro-Trump bots could make Trump’s chances appear better than they are.
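A simple bit of arithmetic shows why that weighting matters. All figures below are made up; the point is that when fewer of a candidate's supporters are on the platform, each of their posts stands in for more voters, so fake posts get amplified too.

```python
# Illustrative reweighting: if fewer of one candidate's likely voters
# are on Twitter, each of their posts counts for more. Numbers invented.
observed_posts = {"Biden": 120_000, "Trump": 60_000}
# Assumed fraction of each candidate's supporters active on the platform.
platform_share = {"Biden": 0.40, "Trump": 0.20}

implied_supporters = {
    c: observed_posts[c] / platform_share[c] for c in observed_posts
}
total = sum(implied_supporters.values())
print({c: round(v / total, 3) for c, v in implied_supporters.items()})
# Equal implied support (0.5 / 0.5) despite a 2:1 raw post gap,
# which is why unfiltered pro-Trump bots could inflate the forecast.
```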

And none of the election forecasts take into account how legal challenges (which Trump is threatening to mount if the results don’t go his way in some battleground states) or “faithless electors” (members of the Electoral College who don’t vote for the candidate they pledged to support) might affect who ultimately is inaugurated on January 20.

And now here’s the rest of this week’s A.I. news.

Jeremy Kahn 
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Intel acquires SigOpt. The semiconductor giant is increasingly focusing on chips and software aimed at speeding up A.I. applications. Now it has added to a growing stable of A.I.-related acquisitions by snapping up SigOpt, a San Francisco-based company whose software automatically performs the time-consuming and finicky task of tuning the hyperparameters (the configuration settings, such as learning rate and network size, that govern how a model is trained) in large neural network models. SigOpt claims its software can save companies 90% of the cost of building and training such a model. Terms of the deal were not disclosed.
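For a sense of the drudgery such tools automate, here is a plain random search over hyperparameters using scikit-learn as a stand-in. SigOpt's service applies more sample-efficient optimization, so treat this only as an illustration of the task.

```python
# Random search over hyperparameter settings (not model weights),
# scored by cross-validated accuracy. The search space is illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
param_space = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],        # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-2],
}
search = RandomizedSearchCV(
    MLPClassifier(max_iter=300), param_space, n_iter=8, cv=3, random_state=0
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```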

British online grocer Ocado buys two U.S. robotics companies. The U.K. company, known for its highly automated customer fulfillment centers, announced it is buying Kindred Systems for $262 million and Haddington Dynamics for $25 million in cash and stock. Ocado said it will use the new robots to take even more human workers out of its warehouses, automating both the picking of individual customer orders as well as the unloading of large pallets of items from suppliers. The deals also signal Ocado's move into supplying technology to online retailers outside the grocery industry. I wrote about the development for Fortune here.

...But Walmart scales back on robots in its stores. The retail giant announced it would stop using robots to rove the aisles of its stores and check inventory levels after finding that it got similar results with its human employees. The decision is a big blow to Bossa Nova Robotics, a spin-out from Carnegie Mellon University's esteemed robotics department, which had planned to have its six-foot-tall stock-scanning robots in about a quarter of all Walmart stores. The robotics firm was forced to lay off about 50% of its staff in response to Walmart's decision, according to a story in The Wall Street Journal.

Accenture forms strategic partnership with the U.K.'s Alan Turing Institute. Accenture said in a blog post that the move was designed to drive further collaboration between industry and cutting-edge data science and A.I. research through shared research projects, the development of use cases for A.I. in industry, internship and scholarship opportunities for master's and PhD-level students within Accenture, and skills training for executives and data scientists. It also said the partnership would help drive the use of A.I. in hubs outside of London, in particular Newcastle, Manchester, and Edinburgh.

South Park creators debut television show built on deepfakes. The creators of the hit cartoon series South Park, Trey Parker and Matt Stone, have launched a new series called Sassy Justice in which all of the characters are rendered using deepfakes, according to a story in The Register. In this case, the title character, a local news reporter named Fred Sassy, has the face of U.S. President Donald Trump, while other characters have the heads of Facebook CEO Mark Zuckerberg, former U.S. vice president Al Gore, Ivanka Trump and Jared Kushner.

A deepfake was used to create the fake author of the dubious Hunter Biden report. Martin Aspen, allegedly a Swiss security analyst who authored a report about the business activities of Joe Biden's son in China, is a completely fictional persona, disinformation researchers have concluded, according to a report by NBC News. The researchers say the report was part of an elaborate attempt to smear Biden. Among many other indications that Aspen is a fiction, the experts concluded that photos of Aspen that appear on various social media pages were created using deepfake technology, the A.I.-based method that can create fictional faces or graft one person's head onto another's body.

EYE ON A.I. TALENT

Kneron, a San Diego, California-based startup that makes specialized computer chips and other hardware designed to let A.I. applications run "on the edge," on devices ranging from mobile phones to cars to toasters, without constant communication with a remote data center, has hired Alex Lo as chief strategy officer. Lo was formerly corporate vice president of sales at Nvidia in Taiwan.

Wilson Sonsini Goodrich & Rosati, the well-known Silicon Valley law firm, has hired Josh Kaplan as a partner in its London office. Kaplan was previously the chief operating officer and general counsel at online payments platform Checkout.com.

Employee benefits management company Benefitfocus, based in Charleston, South Carolina, has hired John Thomas to be its chief data officer, according to a report in trade publication AiAuthority. Thomas was previously executive vice president of data science at Red Ventures and is the current president of the Analytics & Big Data Society.

EYE ON A.I. RESEARCH

Here's another one for Election Day. Weights & Biases, a San Francisco company that makes programming tools for people who create neural network-based A.I. systems, has launched a competition for people to develop A.I. that can automatically extract information about political television ad purchases from government disclosure forms.

As the company explains: "U.S. TV stations are legally required to publicly disclose these ad sales but not to make them machine readable or easy to aggregate. Every election, tens of thousands of PDFs are posted to the FCC Public Files in hundreds of different formats. Sifting through them to understand the larger picture or find interesting trends is an incredibly time-consuming and/or expensive process for investigative journalists and regular folks alike." 

Enter machine learning. To train a deep-learning system to automatically extract meaningful data from these forms, Weights & Biases has compiled a dataset of 20,000 labeled receipts for political ads bought during the 2012, 2014 and 2020 U.S. election cycles. Weights & Biases itself trained a system on 15,000 documents, teaching it to extract the following information: the advertiser, the total paid, the contract number, and the start and end dates of the ad campaign. It then tested this model on the remaining 5,000 receipts in its dataset.

The W&B A.I. model achieved 70% average accuracy across all five fields and 90% accuracy on extracting the total amount paid. The company is now opening the dataset up to others to try their hand at the problem—with the goal of achieving the best accuracy at information extraction from the fewest hand-labeled examples. This matters because, the company says, "There is a very long tail of possible form layout and over 100,000 unlabeled PDFs from the 2020 election ads alone."
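The evaluation described above amounts to per-field exact-match accuracy, averaged across the five fields. A minimal sketch, with invented records and approximate field names:

```python
# Per-field exact-match accuracy over held-out receipts, averaged
# across five fields. Field names follow the article's list loosely;
# the records below are invented examples, not competition data.
FIELDS = ["advertiser", "total_paid", "contract_number", "start_date", "end_date"]

predictions = [
    {"advertiser": "Cmte A", "total_paid": "1500.00", "contract_number": "123",
     "start_date": "2020-10-01", "end_date": "2020-10-07"},
]
ground_truth = [
    {"advertiser": "Cmte A", "total_paid": "1500.00", "contract_number": "124",
     "start_date": "2020-10-01", "end_date": "2020-10-07"},
]

per_field = {
    f: sum(p[f] == g[f] for p, g in zip(predictions, ground_truth)) / len(ground_truth)
    for f in FIELDS
}
average = sum(per_field.values()) / len(FIELDS)
print(per_field, round(average, 3))  # contract_number misses in this example
```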

FORTUNE ON A.I.

Twitter and a Supercell billionaire are backing a new A.I.-focused venture capital fund—by Jeremy Kahn

4 key moments from the Senate’s showdown with Big Tech CEOs—by Danielle Abril

This startup is using robots and A.I. to design new drugs—by Jeremy Kahn

Microsoft’s cloud could be a bit foggy for the next quarter—by Jonathan Vanian

China’s new five-year plan has an ambitious aim: to become a self-sufficient, global tech superpower—by Grady McGregor

BRAIN FOOD

Jonathan and I have written about the danger of leaving A.I. projects solely in the hands of engineers and data scientists. At the very least, to build successful—and safe—A.I., it is critical to involve domain experts for the area in which the A.I. system will be deployed. Better yet, there should be involvement from as diverse a group as possible.

These points are hammered home in an essay in the Harvard Business Review authored by two top Google executives: Donald Martin Jr., a senior staff technical program manager and social impact technology strategist, and Andrew Moore, the head of Google's cloud A.I. and industry solutions.

One of the biggest dangers that a diverse team can guard against, the two write, is errors in attributing causation. Today's deep-learning systems are excellent at finding subtle correlations in huge datasets. But, as anyone who has ever studied statistics knows, correlation does not equate to causation.
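A tiny simulation makes the point: a hidden confounder can make two causally unrelated variables look strongly correlated, and a model trained on them alone would happily find the pattern. The variable names below are invented for illustration.

```python
# A hidden confounder induces correlation between two variables
# with no causal link between them. Names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
confounder = rng.normal(size=10_000)                   # e.g. hot weather
x = confounder + rng.normal(scale=0.5, size=10_000)    # ice cream sales
y = confounder + rng.normal(scale=0.5, size=10_000)    # sunburn rates

print(round(np.corrcoef(x, y)[0, 1], 2))  # ~0.8, yet x does not cause y
```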

"AI system developers — who usually do not have social science backgrounds — typically do not understand the underlying societal systems and structures that generate the problems their systems are intended to solve. This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes," Martin and Moore write.

To counter this, Martin and Moore advocate a system in which a diverse team of stakeholders come up with a hypothesis about what factors are actually driving correlations. "This process should happen at the earliest stages of product development — even before product design starts — and be done in full partnership with communities most vulnerable to algorithmic bias," they write.

Their conclusion: "If we, as a field, want this technology to live up to our ideals, then we need to change how we think about what we’re building — to shift our mindset from “building because we can” to “building what we should.” This means fundamentally shifting our focus to understanding deep problems and working to ethically partner and collaborate with marginalized communities. This will give us a more reliable view of both the data that fuels our algorithms and the problems we seek to solve. This deeper understanding could allow organizations in every sector to unlock new possibilities of what they have to offer while being inclusive, equitable and socially beneficial."