Although much of the focus on artificial intelligence tends to concentrate on developments in the U.S. and China, people would be wise to pay attention to the second most populous country in the world: India.
India is one of the fastest-growing economies in the world, has an established technology industry, and is the fourth largest producer of A.I. research papers (behind the United Kingdom, the U.S., and China), according to a recent study about India’s A.I. potential by Georgetown University’s Center for Security and Emerging Technology (CSET).
Husanjot Chahal, a CSET research analyst and one of the study’s authors, explained that the report is essentially an evaluation of “India’s A.I. capabilities,” as measured through statistics including the number of academic papers and patents that India produces yearly as well as various A.I. investment figures specific to the country.
The researchers learned, for instance, that Indian investors participated in A.I.-related deals in the country worth an estimated $1.2 billion from 2015 through 2019. During that same period, U.S. financiers were estimated to have invested $858 million into Indian companies that specialize in A.I., underscoring the significance of U.S. funding in India’s A.I. ecosystem. Chinese investors, by comparison, invested only an estimated $159 million into Indian A.I. companies during that timespan.
A misunderstanding that some Americans may have is that the U.S. and India are heavily competing with each other for A.I. talent, whereas “the fact is, it is more an avenue of cooperation between India and the U.S. than an avenue of conflict,” Chahal said.
There’s a long history of Indian-born technologists who eventually leave the country to pursue PhDs and more advanced computer science-related degrees and credentials in the U.S., essentially showing that “the AI sector in India and the United States is interlinked,” the report explained. The study’s authors believe this A.I. talent relationship is “uniquely beneficial to both India and the United States and is more an avenue of cooperation than conflict.”
So far, Indian lawmakers don’t view this so-called brain drain as a negative for their burgeoning A.I. ecosystem. Indian officials appear to recognize that Indian technologists are likely to seek education and even work in the U.S. because there’s more suitable computing infrastructure in place for them to do cutting-edge research. Currently, India is well behind the U.S. in access to the cloud-computing services that could help the country’s A.I. technologists conduct the more sophisticated kinds of deep-learning research that require lots of computing power.
Interestingly, Chahal said that when it comes to cloud computing in India, the country’s technologists and companies tend to use the services of U.S. cloud providers like Amazon Web Services, Google Cloud, and Microsoft Azure. She hasn’t seen Indian companies using the services of Chinese cloud giants like Alibaba, suggesting that Chinese businesses have had less success pitching their cloud services to Indian entities.
Indian policy officials are also more hopeful about the new Biden Administration’s approach to immigration than they were about the Trump Administration’s, which temporarily suspended foreign-worker visas last summer.
“There is more hope than fears as of now,” she said regarding Indian technologists coming to the U.S.
A.I. IN THE NEWS
Microsoft goes big into health A.I. Microsoft said it would buy the software company Nuance, which specializes in A.I. speech technology, for $19.7 billion, making it the company’s biggest acquisition under CEO Satya Nadella since the company bought LinkedIn for $26.2 billion. Nuance is a nearly three-decade-old company that was once a partner with Apple and helped build the backend infrastructure of Apple’s Siri voice assistant. Analysts believe that Nuance’s technology and its relationships in the healthcare industry (Microsoft has previously worked with Nuance to develop A.I. tools for healthcare clients and physicians) could benefit Microsoft as the company continues to invest heavily in courting healthcare clients.
You were told not to but you did anyway. Law enforcement in the East Bay city of Alameda, Calif., has been using facial recognition technology sold by the controversial startup Clearview even though the city banned the use of the technology in December 2019, reported the San Jose Mercury News. An Alameda city official told the newspaper that “he did not know how many Alameda officers may have used Clearview or the circumstances in which they had done so, such as whether they decided on their own to use the software or were directed by senior officers.”
More A.I. bias issues at Facebook. Researchers at the University of Southern California discovered that Facebook’s job recommendation systems “were more likely to present job ads to users if their gender identity reflected the concentration of that gender in a particular position or industry.” Because of these A.I.-powered recommendation systems, women were more likely to be shown technical jobs at Netflix than at companies like Nvidia because the streaming video company has higher levels of female employment, the report said. A Facebook spokesperson told The Wall Street Journal that the company has “taken meaningful steps to address issues of discrimination in ads and have teams working on ads’ fairness today.”
Self-driving trucks to cruise the public markets. TuSimple, a San Diego-based startup specializing in self-driving trucks, plans to go public, revealing its financials in a document with the SEC. The company’s filings underscore the challenges companies specializing in autonomous vehicles face as they attempt to create sustainable businesses in a still nascent industry. TuSimple brought in $1.8 million in overall sales for 2020, but generated a net loss of $178 million during the same year. The company, through a partnership with trucking giant Navistar, plans to produce its “autonomous semi-trucks for the North American market at scale by 2024,” according to the filings.
EYE ON A.I. TALENT
Quantropi has hired Michael Redding to be the cyber security company’s chief technology officer. Redding spent nearly three decades at Accenture, most recently as co-founder and managing director of Accenture Ventures.
Chesterfield Faring has picked Steven Weiss to be the investment banking firm’s CTO and managing director. Weiss has held a variety of roles at several financial institutions, including The Carlton Group.
OneShare Health has named Heather Harrington to be the non-profit’s chief digital officer. Harrington was previously the organization’s executive vice president of marketing.
EYE ON A.I. RESEARCH
A.I. to aid clinical trials. Researchers from Stanford University and Genentech published a paper in Nature about an A.I. tool, released to the open-source community, that could potentially identify more eligible participants for clinical drug trials. The tool was developed to help solve a common issue clinicians face when identifying people who can participate in a drug trial. Oftentimes, if a person is too old or once had a certain ailment, they may be ineligible to join a clinical trial because of potential risks, even though they might benefit from the treatment.
The A.I. tool analyzes electronic health records in order to “compare the survival outcomes of individuals who did or did not receive a particular approved drug treatment,” according to Nature.
From Nature: Trial emulation such as this can be used to assess the effects of including or omitting eligibility criteria from the original clinical trial. This offers a way to understand how eligibility criteria can be optimized by assessing the effectiveness of the treatment and the trade-offs between trial inclusiveness and participant safety.
FORTUNE ON A.I.
Artificial intelligence isn’t helping you hire the best person for the job—By Fortune Editors
The global chip shortage may be hitting Apple—By Aaron Pressman
With pandemic’s end in sight, Google searches for resorts and hotels are highest in nearly a decade—By Danielle Abril
Data from half a billion LinkedIn users has been scraped and put online—By Jonathan Vanian
Let A.I. fail. Too much blind faith in the capabilities of modern A.I. could create unrealistic expectations for the U.S. military, which could lead to pushback from officials against using the technology when it fails in certain cases, wrote researchers from the Perry World House at the University of Pennsylvania in a piece published by Foreign Affairs. The authors suggest that the U.S. Department of Defense needs to “invest in making testing, evaluation, verification, and validation of new AI applications more efficient,” advice that could theoretically be applied to nearly every industry attempting to use A.I.
The authors recommend that in order to “strike this balance,” the “U.S. government will need to set more realistic expectations about what AI can do for the military.”
From the article: It must counter the popular focus on the fantastical—lethal autonomous weapon systems and artificial general intelligence, for instance, remain closer to sci-fi than reality—with a carefully calibrated, well-informed, and realistic picture of what AI can actually do.
Still, it's likely that the “realistic picture of what AI can actually do” is much more mundane than what military officials may envision.