
Artificial Intelligence and the need for speed

February 18, 2020, 4:02 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Artificial intelligence’s future hinges on the Internet getting faster for both consumers and businesses.

Networking giant Cisco released its annual report about the Internet on Tuesday, and, as in previous years, some of the conclusions were obvious: more people will get online in the coming years using more devices, and Internet speeds will keep rising.

But Cisco executive Thomas Barnett said that the growth—by 2023 there will be 5 billion Internet users (up from 3.9 billion in 2018), 29.3 billion web-connected devices (up from 18.4 billion in 2018), and broadband speeds of 110 Mbps (versus 45.9 Mbps in 2018)—is significant beyond the fact that numbers are simply getting bigger. To handle the load, telecommunication companies will have to increasingly use machine learning as a traffic controller.

Cisco’s report found that 61% of telecom providers plan artificial intelligence projects for so-called edge computing, a buzzy tech-industry term for crunching data near where the data is generated and used instead of at a cloud data center that is likely hundreds of miles away. Walmart’s experimental Levittown, N.Y. store that’s outfitted with cameras and over 100 servers to better track inventory and customers is one example of edge computing.

But in order for edge computing to gain greater momentum, the Internet must get faster, which is why Cisco predicts that mobile carriers will start upgrading their networks to accommodate the increasing demand. And as these big carriers—AT&T and Verizon, for example—expand their Internet infrastructure, they’ll use machine learning to more efficiently distribute Internet access. 

So far, telecommunication companies haven’t built the networking infrastructure to accommodate these kinds of futuristic A.I. applications. But Barnett expects that to change, with telecom giants spending more on 5G and next-generation Wi-Fi.

“Their business depends on it,” Barnett said.

Jonathan Vanian 


Putting the brakes on facial recognition. Senators Cory Booker and Jeff Merkley introduced a bill, the Ethical Use of Facial Recognition Act, that would stop federal agencies from using facial recognition technologies until federal guidelines are created, tech news publication VentureBeat reported. The report noted that the bill comes as facial recognition is being marketed to police departments and government agencies, even though the technology has a history of less accurate performance for people of color and women. The bill joins a growing number of federal A.I.-related bills introduced since 2018.

Facebook heads to Brussels. Facebook CEO Mark Zuckerberg plans to meet with the European Union's competition chief prior to the EU’s executive arm unveiling new A.I. regulations on Wednesday, CNBC reported. “Zuckerberg’s visit also happens at a time when European regulators are assessing whether Facebook’s data practices have disrespected competition law,” the report said.  

More problems for Clearview AI. Clearview AI, the controversial facial recognition startup that’s built an enormous database of faces scraped from the public Internet, is being sued again, tech news publication CNET reported. "Clearview has amassed a database of more than 3 billion photographs that it scraped from sources including Instagram, Twitter, YouTube, Facebook, Venmo and millions of other websites," attorneys wrote in the lawsuit. "Users can take a picture of a stranger on the street, upload it to Clearview's tool and instantly see photos of that person on various social media platforms and websites, along with the person's name, address and other identifying information."

Army and A.I. poisoning. The U.S. Army is funding research to defend against so-called adversarial attacks, in which hackers inject certain kinds of data intended to mistrain or corrupt deep learning systems, government IT publication Fedscoop reported. Although it’s unclear whether the Army has experienced actual hackers using such techniques to infiltrate or tamper with its A.I. systems, the mere notion of the attacks is enough for the military to work on preventive measures. Some of the software the Army is developing is “designed to detect potential backdoors in a database and then instruct the algorithm to unlearn connections it may have picked up from the bad data,” the report said.
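To make the idea of data poisoning concrete, here is a toy sketch—unrelated to the Army’s actual software, which the report does not describe at the code level. Injecting a few mislabeled training points drags a simple nearest-centroid classifier’s decision boundary, so a point the clean model classifies correctly gets misclassified by the poisoned one. All names and numbers are illustrative.

```python
# Toy data-poisoning demo: flipped-label points shift a
# nearest-centroid classifier's decision boundary.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (value, label) pairs with label 0 or 1
    c0 = centroid([x for x, y in data if y == 0])
    c1 = centroid([x for x, y in data if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) < abs(x - c1) else 1

clean = [(v, 0) for v in (1.0, 1.2, 0.8)] + [(v, 1) for v in (5.0, 5.2, 4.8)]
# Poison: inject far-out points mislabeled as class 0,
# dragging class 0's centroid toward class 1's territory.
poisoned = clean + [(9.0, 0), (9.5, 0), (10.0, 0)]

clean_model = train(clean)
bad_model = train(poisoned)

print(predict(clean_model, 6.0))  # 1: correctly near the class-1 cluster
print(predict(bad_model, 6.0))    # 0: the poisoned centroid now claims it
```

The point of the sketch is that the attacker never touches the model itself, only the training data—which is why the Army’s defensive software focuses on spotting suspect data and unlearning its influence.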


Microsoft CEO Satya Nadella reflects on how the tech industry can regain the public’s trust and points to A.I. as one area where companies can rebuild goodwill. Nadella tells Fortune:

You are introducing a model that, say, is built on a human corpus of language—it’s going to pick up a bunch of bias based on the data it trained on. The first way to protect against that is by having a diverse team building the model in the first place. Let’s not abdicate control. Do we have the internal processes to ensure more diversity in our teams? We have engineering processes for doing secure code—what is the moral equivalent?


A guide to deep learning and finance. Researchers from TOBB University of Economics and Technology in Ankara, Turkey published a paper that surveys deep learning research on financial applications like fraud detection, risk assessment, and stock market prediction. Among the survey’s findings: Python is the most popular programming language among A.I. researchers building deep learning finance models, and the long short-term memory (LSTM) architecture is a favorite with A.I. finance researchers because of “its well-established structure for financial time series data forecasting.”
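The survey’s point about LSTMs can be made concrete with a minimal, untrained LSTM cell in NumPy: the cell state carries information across time steps, and learned gates decide what to keep, forget, and expose—the property that suits sequential data like price series. The weights, sizes, and toy series below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. x: input (d,), h: hidden state (n,), c: cell state (n,).
    W: (4n, d), U: (4n, n), b: (4n,) pack all four gates' parameters."""
    z = W @ x + U @ h + b
    n = h.shape[0]
    i = sigmoid(z[:n])        # input gate: how much new info to write
    f = sigmoid(z[n:2*n])     # forget gate: how much old state to keep
    o = sigmoid(z[2*n:3*n])   # output gate: how much state to expose
    g = np.tanh(z[3*n:])      # candidate cell update
    c_new = f * c + i * g     # cell state blends old memory and new input
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run over a toy "price" series with random (untrained) weights.
rng = np.random.default_rng(0)
d, n = 1, 8
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for price in [101.0, 102.5, 101.8, 103.2]:
    h, c = lstm_step(np.array([price / 100.0]), h, c, W, U, b)
print(h.shape)  # (8,): a fixed-size summary of the whole series so far
```

In a real forecasting model these weights would be trained and the final hidden state fed to an output layer predicting the next value.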

A.I. and the quest to remember. Researchers from Google’s DeepMind unit published a paper and related dataset detailing a new software architecture intended to help researchers create more capable A.I. language models. The researchers’ proposed “Compressive Transformer” is partly based on the notion that when people read books, they build a “compressed representation of the past narrative” rather than memorizing every single detail in order to understand the story.
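The idea can be sketched in a few lines: a toy memory that keeps recent items verbatim and, instead of discarding the oldest ones when it fills up, pools each expired block into a single averaged summary. This is a loose illustration of the concept only—the class name, buffer sizes, and use of raw numbers (the actual model compresses neural activations) are all assumptions for the sake of the example.

```python
# Toy "compressed memory": recent items are stored exactly;
# overflowed items are average-pooled into summaries rather than dropped.

class CompressiveMemory:
    def __init__(self, mem_size=4, compression_rate=4):
        self.mem_size = mem_size
        self.rate = compression_rate
        self.memory = []      # recent items, kept verbatim
        self.compressed = []  # older items, one summary per block

    def add(self, item):
        self.memory.append(item)
        if len(self.memory) > self.mem_size:
            # The oldest `rate` items become one averaged summary.
            block = self.memory[:self.rate]
            self.memory = self.memory[self.rate:]
            self.compressed.append(sum(block) / len(block))

m = CompressiveMemory(mem_size=4, compression_rate=4)
for t in range(1, 9):
    m.add(float(t))
print(m.compressed)  # [2.5]: items 1-4 pooled into a single summary
print(m.memory)      # [5.0, 6.0, 7.0, 8.0]: recent items kept exactly
```

The trade-off mirrors the book-reading analogy: fine detail survives only for the recent past, while the distant past is retained as a coarse gist.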


Putting politics aside to close the skills gap—By Alan Murray

ARM unveils two new A.I. computer chip designs—By Jeremy Kahn

White House proposes big increase in A.I. and quantum spending while cutting other sciences—By Jonathan Vanian

Did the ‘techlash’ kill Alphabet’s city of the future?—By Robert Hackett


On China, smart cities, and surveillance. As part of Fortune’s special report on “Rethinking the City,” Grady McGregor reports on the Chinese metropolis Shenzhen, which has become a poster child of sorts for the nation’s push into A.I. and so-called smart city technology. As the report explains, city officials “collect massive troves of data to manage traffic congestion, pollution, and resources like water and electricity” as a way to run city infrastructure more efficiently. But the constant data gathering and tracking behind that “civic efficiency” comes at the price of personal privacy and constant surveillance. Now, even Shenzhen’s city government is reflecting on the tradeoff and is “cosponsoring with Hong Kong an exhibition called ‘Eyes of the City.’”

From the article:

The show, which runs through March, features installations examining the role of technology in urban life. One such work invites viewers to reflect on facial recognition, and asks whether they wish to be tracked with the technology through the exhibition. According to organizers, a large majority of participants opt out—a sign that Chinese citizens may be less comfortable with such tracking than conventional wisdom suggests.