A.I. IN THE NEWS
“Sensitive technologies.” Microsoft said it would “divest its shareholding” in the startup AnyVision, which was the subject of an NBC News article published last fall describing allegations that the Israeli company's facial recognition technology was used to secretly monitor West Bank Palestinians. Although Microsoft said that an independent audit of the startup “demonstrated that AnyVision’s technology has not previously and does not currently power a mass surveillance program in the West Bank,” it chose to cut ties with the startup and stop investment in other companies selling facial-recognition technology. “By making a global change to its investment policies to end minority investments in companies that sell facial recognition technology, Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies,” the company said in a statement on its venture capital website.
A.I.-powered chip design. Google researchers are exploring the use of reinforcement learning—the A.I. technique that helps computers learn through repetition—to help create more energy-efficient computer chips, MIT Technology Review reported. The researchers’ technology helps with the challenging problem of placing components on a computer chip in an optimal way. As the report describes, the chip-placement process “requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area.”
Here comes another A.I. “framework.” Huawei has open-sourced its MindSpore A.I. toolset for app development, which means coders can now access the so-called framework for free, VentureBeat reported. Institutions like the University of Edinburgh, Peking University, and Imperial College London are backing the A.I. toolset, which joins a growing list of A.I. frameworks like Google’s TensorFlow and Facebook’s PyTorch, used by researchers to create neural networks.
Chinese big data startup gets big funding. MiningLamp Technology, a Chinese startup specializing in big data and A.I., has received $300 million in funding from investors including Chinese tech giant Tencent and Singapore’s Temasek Holdings, reported the South China Morning Post. The report said that the startup’s “fundraising comes amid a global coronavirus pandemic, expected to further dampen China’s funding environment which had already been cooling amid an overall slowdown in the domestic economy as well as an ongoing tech war with the US.”
THE MISSING DATA
Catherine D’Ignazio, a Massachusetts Institute of Technology assistant professor of urban science and planning, talked to The Guardian about the importance of collecting data that’s more representative of society. After all, the accuracy of A.I.-powered systems is determined by the underlying data used to train them. D’Ignazio, who recently co-wrote the book Data Feminism, explains how certain societal groups have been negatively impacted by current data-collection methods:
One way they are losing is that data most of us would think is important isn’t being collected. We have detailed datasets on things like the length of guinea pig teeth and items lost on the New York City subway. But, in the US, missing data includes maternal mortality data, which has only started being collected recently, and sexual harassment data. And so much of our health and medical understanding is based on research that has been done exclusively on the male body.
EYE ON A.I. RESEARCH
Deep learning, what is it good for? Eric Schmidt, the former Google CEO and Alphabet chairman, and Maithra Raghu, a computer science PhD candidate at Cornell University, published a paper that details several ways deep learning has been used to aid scientific discovery. The paper is essentially a primer on the various machine learning techniques that have become popular in recent years and is intended for researchers interested in applying those techniques to their area of study.
Among the topics the paper covers are strategies related to data augmentation, “an important part of the deep learning workflow” that “refers to the process of artificially increasing the size and diversity of the training data by applying a variety of transformations to the raw data instances.”
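The transformation idea is simple to sketch in code. Here’s a minimal, hypothetical example using NumPy—the specific transformations (a horizontal flip and light pixel noise) and the array shapes are illustrative choices, not drawn from the paper:

```python
import numpy as np

def augment(images, rng=None):
    """Artificially enlarge a batch of images with simple transformations.

    Each input image yields three training instances: the original, a
    horizontal flip, and a copy with small Gaussian pixel noise.
    """
    rng = rng or np.random.default_rng(0)
    augmented = []
    for img in images:
        augmented.append(img)                       # raw instance
        augmented.append(img[:, ::-1])              # horizontal flip
        noisy = img + rng.normal(0, 0.05, img.shape)
        augmented.append(np.clip(noisy, 0.0, 1.0))  # noise, kept in [0, 1]
    return np.stack(augmented)

batch = np.random.default_rng(1).random((8, 32, 32))  # 8 fake grayscale images
bigger = augment(batch)
print(bigger.shape)  # (24, 32, 32): the training set is tripled
```

In practice, libraries handle this step, and the transformations are chosen so that labels remain valid (a flipped cat is still a cat).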
The paper also includes some helpful “implementation tips” for people dipping their toes into deep learning:
Explore Your Data
Before starting with steps in the learning phase (see Figure 1), make sure to perform a thorough exploration of your data. What are the results of simple dimensionality reduction methods or clustering? Are the labels reliable? Is there imbalance amongst different classes? Are different subpopulations appropriately represented?
Try Simple Methods
When starting off with a completely new problem, it is useful to try the simplest version possible. (It might even be worthwhile starting with no learning at all — how does the naïve majority baseline perform? For datasets with large imbalances, it may be quite strong!) If the dataset is very large, is there some smaller subsampled/downscaled version that can be used for faster preliminary testing? What is the simplest model that might work well? How does a majority baseline perform? (This ties into settings where the data has class imbalance.) Does the model (as expected) overfit to very small subsets of the data?
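The “naïve majority baseline” the authors mention takes only a few lines to implement; a minimal sketch, with toy labels invented for illustration:

```python
from collections import Counter

def majority_baseline_accuracy(train_labels, test_labels):
    """Predict the most common training label for every test instance."""
    majority_class, _ = Counter(train_labels).most_common(1)[0]
    correct = sum(1 for y in test_labels if y == majority_class)
    return correct / len(test_labels)

# On a heavily imbalanced dataset (90% class 0), the no-learning
# baseline already looks strong -- a model must beat this to be useful.
train = [0] * 90 + [1] * 10
test = [0] * 45 + [1] * 5
print(majority_baseline_accuracy(train, test))  # 0.9
```

This is exactly why the paper warns that a high accuracy number means little until it is compared against such a baseline.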
FORTUNE ON A.I.
Apple releases COVID-19 screening app, website—By Chris Morris
IBM and The Weather Channel debut coronavirus map—By Jonathan Vanian
With 5G, wearable devices are expected to become even more sci-fi—By Jennifer Alsever
Deep learning comes to the human brain. It’s only fitting that this week’s “Brain Food” section highlights Jeremy Kahn’s recent story about a startup attempting to create computer chips that are embedded with human neurons. Yes, that means Cortical Labs is “building miniature disembodied brains” in the hopes of creating more energy-efficient A.I. chips that can “eventually be the key to delivering the kinds of complex reasoning and conceptual understanding that today’s A.I. can’t produce.”
From the article:
Using real neurons avoids several other difficulties that software-based neural networks have. For instance, to get artificial neural networks to start learning well, their programmers usually have to engage in a laborious process of manually adjusting the initial coefficients, or weights, that will be applied to each type of data point the network processes. Another challenge is to get the software to balance how much it should be trying to explore new solutions to a problem versus relying on solutions the network has already discovered that work well.
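The explore-versus-exploit balance the article mentions is a classic problem in reinforcement learning. One standard (and hypothetical here—not Cortical Labs’ method) way software handles it is an epsilon-greedy rule, sketched below with made-up value estimates:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=None):
    """Balance exploration and exploitation when choosing an action.

    With probability epsilon, explore by picking a random action;
    otherwise exploit the action with the highest estimated value.
    """
    rng = rng or random.Random(0)
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # explore a new solution
    # exploit the best solution discovered so far
    return max(range(len(q_values)), key=q_values.__getitem__)

estimates = [0.2, 0.8, 0.5]  # illustrative value estimates per action
greedy_choice = epsilon_greedy(estimates, epsilon=0.0)
print(greedy_choice)  # 1: with epsilon=0, it always exploits the best-known action
```

Tuning epsilon (and the initial weights the article describes) by hand is part of the laborious manual work that biological neurons, the startup hopes, might sidestep.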