Eye on AI

A wave of A.I. experts left Google, DeepMind, and Meta—and the race is on to build a new, more useful generation of digital assistants

By Jeremy Kahn, Editor, AI
July 5, 2022, 2:20 PM ET
Founding members of Adept AI, a startup that is among a crop of new companies hoping to use advanced A.I. methods to create digital assistants that can perform business tasks. Courtesy of Adept AI

Alexa, what’s the future of digital assistants? I don’t know how Alexa would answer that question. But judging by the number of top A.I. minds who have recently left big tech companies to create well-funded startups dedicated to building a new breed of digital assistants aimed at being useful for business, a golden era of digital work companions may be just around the corner.

Among this new crop of digital startups is Adept AI Labs. The company, which emerged from “stealth mode” earlier this year with $65 million in initial venture capital funding, stands out for its founding team. They include a group of researchers from Google Brain who in 2017 invented the A.I. architecture known as “The Transformer.” This algorithmic design has underpinned a huge number of A.I. advances, especially in natural language processing, over the past five years. Now, the team that created the Transformer thinks the same basic idea can be used to create more capable, general assistants that will be able to work alongside people to help perform a wide range of business tasks.

“The problem we’ve carved out is how to get machines to collaborate with humans and actually build things for them,” says Ashish Vaswani, Adept’s co-founder and chief scientist. Vaswani was the lead author on the paper that introduced the Transformer. He says what Adept is building is not simply a better chatbot. “We want to figure out how to get machines to perform actions for people, not just have conversations with them.”

Vaswani says the software Adept is building will learn through human feedback, not just by ingesting a lot of pre-existing data from text, which is how most large language A.I. systems are trained today. David Luan, Adept’s co-founder and CEO, says that language understanding is a key capability that Adept’s software will have to possess, since language is a major way humans provide feedback. But the system won’t just stop with language. “You can think of it as a universal teammate,” Luan says. “If you had another person on your team, what would you shamelessly ask them to do? That’s what we want this software to do.”

Adept’s first step has been creating software that can follow natural language instructions to perform tasks using other software. In a demonstration that Adept has posted online, its software uses a basic SQL database to perform a variety of tasks. A user types “can you grab the name and population for every country?” and the software pulls that data from the database and assembles it in a simple table. The user then asks the software to “make a bar plot of that,” and it does so. But the plot is hard to read because it contains too many countries. So the user asks it to just “show the countries with the 6 highest populations,” and the software comes right back with a much easier-to-read chart. This time, though, the labels for the six countries are overlapping, which still isn’t great. So the user types, “Good. But the x axis is still a bit hard to read, can you fix that?” And remarkably, the software does so, writing the labels on an angle, even though the feedback from the user was not that specific. Later in the demo, the software grabs publicly available U.S. unemployment figures from the Internet and charts them.
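To make that interaction concrete, here is a rough, hypothetical Python sketch of the kind of SQL query and plotting code an assistant like this might generate behind the scenes. The database file, table name, and column names are illustrative assumptions, not details of Adept’s actual system.

```python
# Illustrative sketch only: roughly what an assistant might generate for the
# demo described above. The database file "world.db", the table "countries",
# and its columns are assumptions, not Adept's actual implementation.
import sqlite3
import matplotlib.pyplot as plt

conn = sqlite3.connect("world.db")  # hypothetical database of countries

# "Can you grab the name and population for every country?"
all_rows = conn.execute("SELECT name, population FROM countries").fetchall()

# "Show the countries with the 6 highest populations"
top6 = conn.execute(
    "SELECT name, population FROM countries ORDER BY population DESC LIMIT 6"
).fetchall()

# "Make a bar plot of that"
names, populations = zip(*top6)
plt.bar(names, populations)

# "The x axis is still a bit hard to read, can you fix that?"
# One reasonable interpretation: angle the labels so they no longer overlap.
plt.xticks(rotation=45, ha="right")
plt.ylabel("Population")
plt.tight_layout()
plt.show()
```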

This is what Luan calls teaching the software to “climb the ladder of abstraction.” Eventually, Vaswani says, he wants the software to be able to take an instruction as abstract and complex as “tell me how my customers are churning” and have it analyze the data and produce a report, all without receiving additional instructions.

Why didn’t Vaswani and his group just stay at Google and build this general assistant for the tech giant? Well, Niki Parmar, another member of the Google team who left to co-found Adept as its chief technology officer, says that at Google, A.I. research is set up to enhance existing products, not create entirely new product categories. “This is what excites us about Adept,” she says. “Here we can have both research and product together.” She says Adept plans to have a minimum viable product out with customers within months. “We are a small team that is very aligned to the mission, and we can move fast,” she says.

In addition to Adept, there are startups such as Cohere AI, also founded by veterans of Google Brain, including researchers who worked alongside Vaswani on the Transformer, as well as alumni from Meta’s AI Research division and DeepMind. And there’s Inflection, co-founded by DeepMind co-founder Mustafa Suleyman and LinkedIn co-founder Reid Hoffman. All of these companies are looking to create A.I. that can assist humans with a wide variety of tasks.

It will be interesting to see how capable these new digital assistants really are, which ones gain traction and for what uses, and how major tech companies such as Google and Microsoft respond to what could turn out to be a formidable threat to parts of their business.

With that, here’s the rest of this week’s news in A.I.  

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

Correction, July 7: An earlier version of this story misspelled the last name of Adept co-founder and CEO David Luan and the first name of Adept co-founder and CTO Niki Parmar.

A.I. IN THE NEWS

Researchers claim A.I. model can accurately predict crime. That is what a group from the University of Chicago says it has been able to do by creating a "digital twin" of certain cities and then training an A.I. to forecast where certain types of crime will occur. After training the model on data from Chicago from 2014 to 2016, the A.I. system was 90% accurate in forecasting where crime would occur in the weeks following the training period. The researchers said the system achieved similar results for seven other cities. Scientists not involved in the project said they were concerned that such systems could perpetuate racial bias in policing, especially because the data the system was trained on included both crimes that citizens report and crimes that police proactively go out searching for. The UChicago team said that while they shared some of these concerns, their A.I. system could also be used to identify racial bias in policing. There's more here from New Scientist.

Japan using A.I. to identify rip currents. Officials in Kanagawa prefecture, south of Tokyo, are using A.I. to identify rip currents, which cause 60% of drowning deaths, and send a warning to bathers and lifeguards, The Guardian reports. The system uses a pole-mounted camera to take video of the waves at a popular surf beach and then uses A.I. to identify rip currents and anyone swimming nearby, sending alerts to lifeguards via a smartphone app.

Age prediction A.I. software may not be accurate, CNN finds. Reporters from the U.S. news network tested the A.I. age prediction software from London-based startup Yoti that Meta's Instagram social media platform plans to use to verify users' ages and found that the results varied. For "a couple of reporters," CNN said the estimated age range that Yoti's software provides was accurate, but for others it was "off by many years." In one case, it estimated that an editor who is more than 30 years old was between the ages of 17 and 21. Experts the network talked to were also divided on whether the technology is a good, ethical use of A.I. or a problematic one that helps normalize the use of facial recognition and might not work as accurately as the companies deploying it assume.

Activists push to convince EU to ban A.I. lie detectors. The call comes after controversial pilot tests conducted in 2019 along the borders of Greece, Hungary, and Macedonia that used A.I. from a British company called Silent Talker, which claimed to be able to identify deception. Those tests showed the technology did not work as expected, and the company that made the software has since dissolved. But, according to Wired, lawyers, activists, and some European Union lawmakers are calling for such lie-detection software to be explicitly banned as part of the EU's proposed Artificial Intelligence Act.

EYE ON A.I. RESEARCH

The trade-offs between privacy, security, and performance in machine learning remain unresolved. That was the takeaway from a recent research paper from a team at the Delft University of Technology in the Netherlands that looked at various approaches to privacy-preserving machine learning, in which some information is shared to train an A.I. system but the actual underlying data remains private. It turns out that most federated learning approaches, in which only the weights of a neural network model are shared, still have the potential to leak some data, meaning someone could potentially reverse engineer the underlying data, the researchers found. But they also found that methods to further secure the data using a cryptographic technique known as homomorphic encryption resulted in massive slowdowns in the time it took to train the A.I. "Our results support the fact that as our encryption system gets stronger, the performance loss is higher, making the decision of balancing security and performance a difficult but nevertheless vital issue for the developers," the researchers wrote.
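As a rough illustration of the weight-sharing setup the paper examines, here is a minimal federated-averaging sketch in Python; it is a toy example built on assumed data and models, not the Delft team's experimental code. Each client trains on its own private data and shares only model weights with a central server, and it is those shared weights an attacker could try to reverse engineer.

```python
# Minimal federated-averaging sketch (illustration only, not the paper's code).
# Each client fits a linear model on its private data and shares only the
# learned weights; the server averages them. Even these shared weights can
# leak information about the underlying data, which is the risk the research
# described above highlights.
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y):
    """Least-squares fit on a client's private data; only the weights leave."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three clients, each holding private data drawn from the same underlying model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

# Clients share weights only; the server averages them into a global model.
client_weights = [local_train(X, y) for X, y in clients]
global_w = np.mean(client_weights, axis=0)
print("Averaged global weights:", global_w)
```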

FORTUNE ON A.I.

How Formula 1’s McLaren team is using A.I. to fuel performance—by Stephanie Cain

Tesla lays off about 200 Autopilot workers and closes a California office as Musk staff cuts spread—by Edward Ludlow, Dana Hull and Bloomberg

What does a dog’s nose know? A.I. may soon tell us—by Jeremy Kahn

Commentary: Quantum hacking is the next big cybersecurity threat. Here’s how companies should prepare for ‘Y2Q’—by Francois Candelon, Maxime Courtaux, Vinit Patel, and Jean-Francois Bobier

BRAIN FOOD

An A.I. learned to redistribute wealth in a way most people found fairer than a system designed by humans. That is the result of research carried out by DeepMind and published this week in the scientific journal Nature Human Behaviour. The point of the research was to see whether an A.I. system could learn from collective human preferences. The mechanism chosen for the experiment was an economic game in which the A.I. system had to figure out how to distribute the contributions each player had made to a collective investment pool in such a way that a majority of the human players would vote for its distribution scheme. It turned out that the method the A.I. system came up with was more popular than any the human players tried.
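For readers unfamiliar with this style of experiment, here is a toy, hypothetical version of such a redistribution game in Python. The endowments, the pot multiplier, and the two payout rules shown are illustrative assumptions for comparison, not the mechanism DeepMind's agent actually learned.

```python
# Toy public-investment game (illustration only; not DeepMind's learned mechanism).
# Players with unequal endowments contribute to a shared pot, the pot grows,
# and a "referee" mechanism decides how to pay it back out.
endowments    = [10.0, 2.0, 4.0]   # hypothetical starting wealth per player
contributions = [5.0, 2.0, 1.0]    # what each player puts into the pot
MULTIPLIER = 1.6                   # the pot grows before being redistributed

pot = sum(contributions) * MULTIPLIER

def payout_absolute(contribs, pot):
    """Pay out in proportion to each player's absolute contribution."""
    total = sum(contribs)
    return [pot * c / total for c in contribs]

def payout_relative(contribs, endow, pot):
    """Pay out in proportion to contribution relative to each player's endowment."""
    shares = [c / e for c, e in zip(contribs, endow)]
    total = sum(shares)
    return [pot * s / total for s in shares]

print("Absolute-contribution payout:", payout_absolute(contributions, pot))
print("Relative-contribution payout:", payout_relative(contributions, endowments, pot))
# The second player contributed their entire endowment; the relative rule
# rewards that heavily, while the absolute rule mostly rewards the wealthiest
# contributor. Different mechanisms credit different behavior.
```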

The researchers found that, among other failings, human players did not sufficiently reward poorer players for making relatively large contributions to the collective pot. The DeepMind team wrote that, "one remaining open question is whether people will trust AI systems to design mechanisms in place of humans. Had they known the identities of referees, players might have preferred human over agent referees simply for this reason. However, it is also true that people often trust AI systems when tasks are perceived to be too complex for human actors." 

The researchers also cautioned that people sometimes state their preferences differently when they are being asked to vote on a policy that is merely being described, rather than one they have experienced firsthand. "However, AI-designed mechanisms may not always be verbalizable, and it seems probable that behaviours observed in such case may depend on exactly the choice of description adopted by the researcher," the researchers wrote.

Finally, the researchers said their results were not an argument for "A.I. government," in which autonomous agents would make policy decisions without human intervention. Instead, they said they simply saw democratic voting as an interesting way of gathering collective human feedback for an A.I. system. In fact, the team pointed out that voting itself can be problematic, with the majority potentially overriding the rights or interests of minority groups.

About the Author
Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

