Artificial Intelligence Will Obliterate These Jobs By 2030
This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
Cubicle workers. Shipping clerks. Loan processors.
“All gone,” Forrester vice president and principal consultant Huard Smith said in describing the impact of artificial intelligence on various professions by 2030.
Smith’s list included a lot of repetitive, manual work that can be automated with machine-learning software. For instance, Forrester projects that 73% of all cubicle-related jobs—think clerical tasks like data entry—will be automated by 2030, equating to over 20 million jobs eliminated.
Location-based workers—a category that includes grocery store clerks—will also be severely affected by A.I., Smith explained. About 38% of location-based jobs will be automated by 2030, eliminating about 29.9 million positions.
A.I.-powered job loss is already occurring in some roles, he said, citing a grocery store that eliminated five human jobs with the help of a robot that can scan products on shelves to track inventory. Only one human worker remains to restock the store.
If the next version of the inventory tracking robot can stock shelves, then the grocery store “won’t actually need anyone,” Smith said.
And if you think that learning to code will give you an edge in the future, think again. Smith said that even software developers are at risk, because “coding is going to be automated.”
“So if you got kids in coding schools, you might keep them there [temporarily], but don’t tell them to stay,” Smith told the audience at an A.I. conference in Santa Clara, Calif. last week. “Get them into A.I., because coding isn’t going to be a job in the future.”
Company executives typically downplay the impact of A.I. on jobs and insist that A.I. will create new ones. But A.I. will wipe out 29% of all U.S. jobs while creating the equivalent of only 13%, Forrester projects.
Smith’s frank talk wasn’t meant to be a total downer, but was instead intended to create a sense of urgency about A.I.’s effect on jobs. Speaking to Fortune after his talk, Smith explained that company management should be candid with employees about the impact of machine learning on their jobs and invest heavily in corporate training programs.
U.S. workers are increasingly worried about A.I.’s potential negative effects on their jobs, and company managers need to take their concerns seriously by helping them adapt to the fast-changing world.
“They will bolt if they feel that you are just cost cutting,” Smith said, referring to companies that are adopting machine learning.
“It will be a difficult 10 years and beyond, and the world doesn’t just stop by 2030, so buckle up,” he warned.
Last week’s Eye on A.I. newsletter was sent with the wrong email subject line. The correct headline was “A.I. Is Everywhere—But Where Is Human Judgment?”
A.I. IN THE NEWS
Marissa Mayer bets on A.I. Former Yahoo CEO Marissa Mayer has revealed that her new startup Lumi Labs will focus on developing consumer apps powered by artificial intelligence technologies, MarketWatch reported. Speaking at a tech conference on Monday, Mayer didn’t reveal too many details about specific products her startup is developing, but said: “The overriding focus is on the grand challenges of A.I. such as self-driving cars and global facial recognition. But there are smaller applications that can be just as useful to benefit people every day.”
Big week for A.I. chips. Intel held a media event in San Francisco last week where it showed off its two new A.I. chips, called Nervana Neural Network Processors. One of the chips is for training deep learning systems and the other is used to help computers act on the data, a process called inference. The company also said that the next generation of its Movidius-branded chips used to power computer-vision tasks would be available during the first half of 2020. Meanwhile, A.I. chip startup Graphcore said that businesses can now access its A.I. chips via Microsoft’s Azure cloud service and that the two companies have been collaborating on improving those chips for natural language and computer vision tasks.
It’s happening again. Over a dozen A.I. researchers from Africa are unable to attend this year’s popular NeurIPS (Neural Information Processing Systems) A.I. conference in Vancouver because the Canadian government is refusing them visas, the BBC reported. The visa issues mark the second straight year A.I. researchers from certain countries have faced difficulties attending the prominent conference. One researcher from the Black in AI group voiced concern to the BBC about visa issues, and said “It’s more and more important for AI to build a diverse body.”
Welcome to club Linux. The Open Neural Network eXchange (ONNX), a project created by Facebook and Microsoft to make A.I. models more compatible with multiple open-source tools, is now part of the Linux Foundation’s A.I.-focused group, the LF AI Foundation. This is important because support from the Linux Foundation could help legitimize ONNX as a noteworthy open-source A.I. tool that companies can trust to use for their deep learning projects.
MODERN-DAY LANTERN LAWS?
Tawana Petty, the director of the data justice program for the Detroit Community Technology Project, discussed some of the potential negative consequences of facial-recognition technology on minority communities during a recent tech-policy workshop in San Francisco. She compared facial-recognition tools used for surveillance to 18th century lantern laws, in which “Black people, indigenous, and Mulatto people had to wear a lit lantern in front of their faces whenever they went out in the presence of white people.”
Petty added: “And so [facial recognition] feels like a direct lineage to that kind of thinking and innovating in surveillance.”
EYE ON A.I. TALENT
Harry Shum, the prominent Microsoft executive vice president who leads the company’s A.I. and research group, will leave the company in February 2020, tech publication ZDNet reported. Microsoft chief technology officer Kevin Scott will take over Shum’s duties. It’s unclear where Shum will head next, but Microsoft said he would still act as an advisor to Microsoft CEO Satya Nadella and founder Bill Gates.
Groq, a startup specializing in A.I. computer chips and related technology, said that HP, Inc.’s supply chain chief Stuart Pann will join the company’s board. Groq CEO Jonathan Ross helped develop Google’s custom A.I. chips known as Tensor Processing Units.
EYE ON A.I. RESEARCH
Rodent research. Researchers from Baylor College of Medicine, Rice University, and the University of Tübingen published a paper about using brain activity from mice to strengthen neural networks that could be fooled by so-called adversarial attacks, which are often created using the same GAN techniques used to develop deepfakes. Tech publication The Register has a helpful explainer on the paper: “In simple terms, the researchers recorded the brain activity of the mice staring at thousands of images and used that data to build a similar computational system that models that activity. To make sure the mice were looking at the image, they were ‘head-fixed’ and put on a treadmill.”
Toxic dataset. Researchers from Jigsaw, a subsidiary of Google-parent Alphabet, released a large dataset containing comments that human annotators sifted through to determine whether they were toxic and offensive. The Jigsaw team also breaks down how each anonymized human annotator rated a particular comment so that researchers who use the dataset for their own A.I. language projects can understand if those particular annotators might be biased in some way. The goal is to reduce the amount of bias that might creep into content-moderation tools, as in the case of Jigsaw’s tool that rated comments made in the African American vernacular as toxic, likely because many of the human annotators who originally rated the comments failed to understand their context. “By releasing the individual annotations on the Civil Comments set, we’re inviting the industry to join us in taking the first step in exploring these questions,” Jigsaw representatives said in a blog post.
A.I.-powered traffic signals. Researchers from Iowa State University published a paper about using deep reinforcement learning—in which computers learn through trial and error—to create a more capable and adaptive traffic-control system. To train and test the system, the researchers used a popular traffic-simulation program called VISSIM and a dataset provided by the Iowa Department of Transportation that “contains traffic flow information during the morning peak, evening peak, and the midday duration.”
FORTUNE ON A.I.
Most Executives Fear Their Companies Will Fail If They Don’t Adopt A.I. – By Jonathan Vanian
Why Mercedes’s Self-Driving Trucks Are Set to Overtake Its Robotaxis – By David Meyer
Americans to Companies: We Don’t Trust You With Our Personal Data – By Danielle Abril
Big Tech Is Coming for Your Most Sensitive Data – By Robert Hackett
A.I.’s Power Play. Tech publication VentureBeat published a long examination of the subtle ways power shapes and influences the debate around A.I. and ethics. The most powerful tech companies like Google and Amazon, for instance, can choose which A.I. features to develop and which “problems are priorities.” The sheer size and influence of these companies gives them an advantage in determining what counts as an ethical use of A.I.—such as whether it’s appropriate to sell facial-recognition technologies to law enforcement, as in the case of Amazon. “Like the railroad barons who took advantage of farmers anxious to get their crop to market in the 1800s, tech companies with proprietary data sets use AI to further entrench their market position and monopolies,” the article said.
IF YOU LIKE THIS EMAIL...
Share today’s Eye on A.I. with a friend.
For even more, check out The Ledger, Fortune's weekly newsletter on where tech and finance meet. Sign up here.