A.I. IN THE NEWS
Facebook's use of A.I. for content moderation under fire for failures. Last month Facebook announced that its automated content moderation systems had gotten good enough that they would take over triaging the posts that are brought to the company's 15,000 human content moderators for review. But the new system doesn't seem to have pleased many folks. Bloomberg reported that many small businesses have had their advertising accounts banned in error by the new software and that they've been unable to get the company to address the problems. Facebook has issued a statement apologizing for "any inconvenience recent disruptions may have caused" but the underlying issue does not seem to have been remedied.
For more about how Facebook is using A.I. across its business and whether it is actually making a dent in the company's massive issues with hate speech, disinformation, phony accounts and more, tune into Web Summit for my fireside chat with Mike "Schrep" Schroepfer, Facebook's chief technology officer, on December 2nd at 7:25 p.m. GMT (2:25 p.m. EST).
ServiceNow acquires Element AI. Montreal-based Element AI, which builds machine learning systems for industry customers, is being acquired by ServiceNow, the cloud-based IT services company, for $500 million, according to a story in TechCrunch. The acquisition represents a major push into A.I. for ServiceNow, which is now helmed by former SAP CEO Bill McDermott. He has made a series of deals recently as he seeks to turn ServiceNow into a one-stop shop for managing companies' digital transformation efforts.
Cerebras claims its massive A.I. computer chip can map fluid dynamics faster than a supercomputer. The Silicon Valley-based A.I. computer chip startup says that its CS-1 system, which is built around a single enormous chip with 18 gigabytes of on-chip memory that has to be kept in a cooling unit about the size of a mini-refrigerator, was 200 times faster at running a complex fluid dynamics simulation than the U.S. Department of Energy's Joule supercomputer, according to a story in tech publication The Register. But the CS-1 was only racing against Joule's largest processing cluster, consisting of 16,384 cores, and not Joule's complete arsenal of 84,000 cores. Plus, "the results should be taken with a pinch of salt," The Register cautions, "as the company has yet to publicly disclose its chip performance in more typical benchmarking tests used for AI and machine learning."
FAA gets closer to approving commercial drones that can operate autonomously. In a move that brings drone delivery operations in the U.S. one step closer to reality, the U.S. Federal Aviation Administration has issued airworthiness criteria for 10 drones, some of which are designed to operate autonomously beyond the line of sight of their operators, the agency said. The criteria were issued for drones made by Amazon, as well as startups Airobotics, Zipline and Wingcopter, among others.
A Supreme Court case could make it easier for researchers to find security flaws in A.I. systems. This week the U.S. Supreme Court heard oral arguments in Van Buren v. United States, a case that will test whether cybersecurity researchers are potentially violating the 1986 Computer Fraud and Abuse Act (CFAA) when they probe existing software and systems for vulnerabilities. A lower court ruled that this sort of research should not run afoul of the law. If the Supreme Court agrees, it will also make things easier for researchers interested in adversarial machine learning, a field of research that deals with how A.I. systems can be tricked into making incorrect classification decisions or predictions. But if the Court reverses the lower court and says that security researchers can be prosecuted for improper use of software, it is likely to have a chilling effect on the field, according to a story in VentureBeat. My colleague Aaron Pressman also has more about the law and the case in Monday's Data Sheet newsletter.
Archaeologists are using machine learning to take the grunt work out of their jobs. A.I. is starting to have a major impact on science, as the DeepMind protein-folding breakthrough shows. But so far, most uses of machine learning in science are less about such fundamental advances and more about process: automating time-consuming and tedious data-collection tasks, as a story in The New York Times demonstrates. The paper looks at how A.I. is being used to spot possible Scythian burial mounds in satellite images, count and classify Roman pottery sherds, and identify human bones being sold illegally on the Internet.
EYE ON A.I. TALENT
ABBYY, a digital intelligence and robotic process automation company based in Milpitas, California, has named Weronika Niemczyk as chief people officer, the company said in a statement. Niemczyk previously led human resources at Ascential, a British media business specializing in events, exhibitions and festivals.
Orbital Insight, the Palo Alto, California-based satellite imagery analytics company, has appointed Kevin O'Brien as its new chief executive officer, the company said in a news release. O'Brien had previously been the company's chief operating officer. Company founder James Crawford, who had been CEO, is transitioning to become chairman of the board as well as chief technology officer.
EYE ON A.I. RESEARCH
Your fancy-pants sales forecasting A.I. might not be as good as you think. That's the conclusion of a study conducted by researchers from Naver, the South Korean Internet company. In a paper published on the research repository arxiv.org, the Naver team looked at the performance of the probabilistic time-series models that have become popular lately for sales forecasting tasks and compared them to a simpler machine learning model and to linear regression. The bad news? Both simpler methods outperformed three supposedly state-of-the-art probabilistic A.I. techniques.
Part of the problem, the researchers report, is that past tests of the probabilistic methods mostly evaluated them on whether they were above or below a certain limit, but not how well they did at forecasting a precise future sales figure. But having an exact forecast "is essential in industries that require specific numbers, such as the number of delivery people in a logistics company." What's more, many of the more sophisticated sales forecasting models had erratic performance on different tests, the researchers found.
They issued a fairly stinging indictment of the way research on probabilistic models has been conducted and suggest that many previous studies may have cherry-picked the data used to evaluate these systems. "Prominent probabilistic time-series models do not work effectively especially for other datasets not used in the original papers," the researchers report.
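The Naver team's central point is about evaluation: score models on the exact sales figure they predict, not just on whether they land inside some band. Here is a minimal sketch of that kind of point-forecast comparison, using made-up sales numbers and a plain least-squares trend line as the "simple" method (the baseline here is a naive repeat-last-value forecast, not any of the models from the Naver paper):

```python
# Compare two simple point forecasts on a synthetic sales series,
# scored by mean absolute error (MAE) against the exact actuals.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def mae(preds, actuals):
    """Mean absolute error between point forecasts and actual values."""
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(preds)

# Hypothetical daily unit sales with a mild upward trend.
sales = [100, 104, 103, 109, 112, 111, 118, 120, 123, 125]
train, test = sales[:7], sales[7:]

# Linear-regression forecast: extend the fitted trend line forward.
slope, intercept = fit_line(list(range(len(train))), train)
linear_preds = [slope * t + intercept for t in range(len(train), len(sales))]

# Naive baseline: just repeat the last observed value.
naive_preds = [train[-1]] * len(test)

print(f"linear regression MAE: {mae(linear_preds, test):.2f}")
print(f"naive baseline MAE:    {mae(naive_preds, test):.2f}")
```

A fancier probabilistic model would be judged the same way: collapse its forecast distribution to a single number per day and compute the error against the figure that actually occurred.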
FORTUNE ON A.I.
As libraries fight for access to e-books, a new copyright champion emerges—by Jeff John Roberts
This new VR simulator helps you prepare for the most awkward office encounters—by Lee Clifford
Why India’s software startups are poised for global dominance—by Atul Jalan and Brewer Stone
Know when to fold ’em: How a company best known for playing games used A.I. to solve one of biology’s greatest mysteries—by Jeremy Kahn
The technological arms race between the U.S. and China over artificial intelligence is growing.
Both nations believe that A.I. could give their respective militaries a big strategic and tactical advantage. But Avi Goldfarb and Jon Lindsay take a sober look at just what kind of military advantage A.I. may convey in a new report published by The Brookings Institution. The answer, they say, has as much to do with the human strengths of the military organization using A.I. as it does the technological capabilities themselves.
"In cases where decision problems are well-defined and plentiful relevant data is available, it may indeed be possible for machines to replace humans. In the military context, however, such situations are rare. Military problems tend to be more ambiguous while reliable data is sparse. Therefore, we expect AI to enhance the need for military personnel to determine which data to collect, which predictions to make, and which decisions to take."
Goldfarb and Lindsay say that it is critical that junior military staff understand the data that is fed into automated decision systems, and how an enemy might try to target or manipulate that data. More human judgment is needed in the lower ranks, not less.
They also write that having better A.I. prediction systems may also lead to a kind of analysis-paralysis on the part of human decision makers: "Ironically, however, the same organizational capacity that enables judgment, and thereby makes war fighting more predictable and controllable, also has the potential to make conflict more ambiguous and less decisive. In short, the ability to automate aspects of decisionmaking can make it harder to come to a decision within an organization or on the battlefield."
Do these same lessons apply in many business contexts? I am always wary about analogies between business and war, and between the organization of militaries and companies, but my guess is that they probably do.