
Eye on A.I.— Facebook and Google’s Fierce and Nerdy Rivalry Over A.I. Software

May 7, 2019, 1:21 PM UTC

Facebook and Google aren’t merely competing for dominance in online advertising. They’re also battling over artificial intelligence.

At the core of their fight is the underlying software for creating neural networks, the software that learns on its own to recognize patterns within data. Although neural networks have been around for decades, it wasn’t until recently that researchers discovered that the technology could be useful for helping computers with tasks like automatically translating languages and recognizing people and objects in photos.
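The pattern learning these frameworks package up can be illustrated with a toy example. The sketch below is a minimal, framework-free illustration in plain Python (all names and numbers are illustrative, not drawn from TensorFlow or PyTorch): a single artificial neuron adjusts its weights by gradient descent until it can separate two groups of points — the same learn-from-data loop that the big toolkits run at vastly larger scale.

```python
import math
import random

# Toy training data: points above the line x2 = x1 are labeled 1, below it 0.
random.seed(0)
data = []
for _ in range(200):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append(((x1, x2), 1.0 if x2 > x1 else 0.0))

def sigmoid(z):
    # Squash a weighted sum into a probability-like value in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# A single "neuron": two weights and a bias, all starting at zero.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5  # learning rate: how big a step each correction takes

# Gradient descent: for each example, nudge the weights to shrink the error.
for epoch in range(100):
    for (x1, x2), label in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        err = pred - label  # gradient of the loss w.r.t. the weighted sum
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

# Fraction of training points the neuron now classifies correctly.
correct = sum(
    (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (label == 1.0)
    for (x1, x2), label in data
)
print(correct / len(data))
```

A real network stacks many such units in layers, and frameworks like TensorFlow and PyTorch automate the gradient computation that is written out by hand here.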

Currently, there’s no easy way for companies to create neural networks, as there is for developing more conventional software. But, in recent years, several big companies have introduced so-called developer “frameworks” that are a first stab at helping coders build neural networks and at making the technology practical to use widely in apps.

In 2015, Google debuted TensorFlow, the leading toolkit for building neural networks, according to researchers who track the software. TensorFlow’s popularity has so far eclipsed rival toolkits from the likes of Amazon and Microsoft.

Until now, that is.

Last week’s annual Facebook developer conference highlighted several updates to PyTorch, Facebook’s neural network construction kit that succeeded an older, less sophisticated version. PyTorch has emerged as one of the fastest-growing open-source technologies, according to some surveys, and is used by a handful of big companies like Genentech, Toyota, and Airbnb.

Although Facebook, like Google, makes no money from its A.I. tools, the company benefits from outsiders using them. For one, the more A.I. researchers use PyTorch, the larger the pool of A.I. talent already familiar with Facebook’s technology, and the easier those researchers are to recruit.

Additionally, like many open-source technologies, PyTorch should improve over time as more of its users share feedback with Facebook. And while Facebook, Google, and others would never publicly admit it, they want to be perceived as leading A.I. firms, and the more their A.I. tools are adopted by third parties, the more they can claim the A.I. crown.

Still, Facebook’s director of applied machine learning Srinivas Narayanan told Fortune that these are early days for PyTorch and A.I. tools overall. Despite the buzz, there is no effort yet to create technical standards around PyTorch or to establish independent groups to oversee the technology, the way the Linux Foundation oversees an umbrella of open-source technologies.

But companies would be wise to pay close attention to the A.I. toolkit wars, because the outcome could be crucial to their A.I. ambitions.

Jonathan Vanian



We need some standards. The National Institute of Standards and Technology is asking companies, academics, and other groups to help the federal government create technical standards around artificial intelligence. The creation of A.I. standards follows a recent White House executive order that is intended to ensure that the U.S. leads the world in A.I.

How Oregon sheriffs use Amazon’s A.I. The Washington Post profiled some Oregon law enforcement workers who use Amazon’s Rekognition video and image analysis technology to catch criminals. It’s difficult to tell how much of an impact the technology has had because, as the article said, “Deputies don’t have to note in arrest reports when a facial-recognition search was used, and the exact number of times it has resulted in an arrest is unclear.”

Tell the cow to “say cheese.” Facial-recognition technologies have a harder time identifying animal faces than human faces because of a lack of available animal selfies to train the A.I. systems, reported The Wall Street Journal. The article highlights the often-amusing ways companies are increasingly taking photos of livestock to improve animal facial-recognition technology in farms.

Two-drink minimum. Stanford University researchers are attempting to create A.I. systems capable of telling original jokes, according to Wired. It’s a major challenge because while neural networks currently work well at generating phrases that loosely imitate text they were fed, they have more difficulty creating their own original zingers.


“As machine learning becomes more powerful, it will be cheaper and easier to monitor workers in that way,” Brishen Rogers, a law professor at Temple University, said recently in Fortune about labor unions pushing back against workplace technology. “Companies are going to be gathering more and more data about how employees are performing on their job and they will use that data to reduce labor costs wherever they can.”


New York City Mayor Bill de Blasio has chosen John Paul Farmer to be the Big Apple’s chief technology officer. Farmer was previously the director of technology and civic innovation for Microsoft and the senior advisor for innovation in the White House Office of Science and Technology Policy during President Barack Obama’s term.

WellStar Health System hired Shalima Pannikode as senior vice president, chief information and digital officer. Pannikode was previously vice president of information technology at health insurance provider Anthem.

Hearst Corporation picked Mahendra Durai as chief information officer. Durai was previously the senior vice president and chief information technology officer for CA Technologies.


A.I.-powered morphine drip. Researchers from institutions including the Harvard-MIT Health Sciences and Technology program and Massachusetts General Hospital published a paper about using reinforcement learning—in which computers learn by trying—to determine appropriate amounts of morphine to give patients in intensive care. The researchers said the goal of the proposed A.I.-powered morphine system “would not be to replace physicians’ clinical judgments about treatment, but to aid clinical decision making with insights about optimal decisions and automatically guide therapy.”

A.I.-powered garbage-cleaning robot. Researchers from Beihang University’s School of Electronic Information Engineering in China and Beijing-based company CloudMinds Technologies published a paper about a robot that can autonomously move on grass and collect trash. The researchers trained the robot’s deep-learning systems to recognize and pick up garbage on a playground without the help of humans.


How One Company Is Using A.I. to Increase Security for a Christchurch Mosque – By Emma Hinchliffe

Twitter and Instagram Are Starting to Imagine a World Without 'Likes' – By Alyssa Newcomb

5 Takeaways from Mark Zuckerberg's F8 Keynote – By Danielle Abril


When A.I. knows too much (or too little) about you. Fortune’s Michal Lev-Ram reported on a recent event at the Stanford Institute for Human-Centered Artificial Intelligence at which deep learning expert Fei-Fei Li and philosopher and historian Yuval Noah Harari spoke about A.I.'s future. Harari voiced concern about advertisers and authoritarian regimes coming to know people better than they know themselves because of how certain A.I. can track people’s online behaviors. Li, a big proponent of A.I., wryly countered the doom and gloom, saying, “I’m very envious of philosophers, because they can propose questions and crises, but they don’t have to answer them.”