No-code A.I. is coming. Is your company ready?
This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
“No code” A.I. platforms, software that allows people without specialized skills to build their own algorithms, are proliferating rapidly.
The companies that market no-code machine learning platforms include Akkio, Obviously.ai, DataRobot, Levity, Clarifai, Teachable Machines, Lobe (which Microsoft bought in 2018), Peltarion and Veritone, to name a few. They allow non-A.I. experts to create A.I. systems using simple visual interfaces or drag-and-drop menus. Some of the software is designed specifically for computer vision, some for natural language processing, and some for both.
The latest to enter the no-code fray is Primer, a San Francisco company that I’ve mentioned before in this newsletter. Primer’s evolution is worth following because it is probably instructive about where the entire A.I. software market is headed. To date, Primer has been known as a leader in creating A.I. software that helps analysts—those who work for government intelligence agencies, as well as the kind who work for banks and for companies in departments like business development and marketing—rapidly sift through vast quantities of news and documents. To achieve this, the company has used state-of-the-art natural language processing techniques.
But as Sean Gourley, Primer’s chief executive officer, explains, as good as Primer’s natural language processing software is, many of its customers want something bespoke: “Our big Fortune 50 companies and big national security customers kept saying the models are great but can I make it do the thing I want it to do?”
Gourley says that Primer came to realize that each customer wanted to train NLP software to do slightly different things. The company also realized, he says, that customers would want to deploy not just a few dozen different models but potentially thousands of pieces of A.I. software. The only way to do that, Gourley says, was to find a way to let customers design and create their own algorithms.
So Primer has developed a no-code platform it calls Automate. It allows a non-expert to take data from something like a Microsoft Excel spreadsheet and, in about 20 minutes, train an A.I. system to perform some key NLP tasks at accuracies approaching human levels.
The first task Primer has focused on with Automate is what’s called “named entity recognition”—identifying mentions of proper nouns in documents. That sounds simple, but it isn’t. And it is an important building block for a decision-making chain that allows Primer’s customers to do things like track terrorist activity or keep tabs on a competitor’s pricing. It can also be used to, for instance, build a tool that will let a company monitor its social media feeds for customers who need attention, says Andrea Butkovic, the product manager in charge of Automate.
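To make the task concrete, here is a deliberately simplified sketch of named entity recognition in Python. This is not Primer’s approach; production systems use trained statistical models, while the lookup table below is a hypothetical stand-in used purely for illustration.

```python
# Toy illustration of named entity recognition: find known entity
# mentions in text and tag them with a label. Real NER systems learn
# these patterns from labeled examples rather than using a fixed
# dictionary like this one.

ENTITY_LEXICON = {
    "Primer": "ORG",
    "San Francisco": "LOC",
    "Sean Gourley": "PERSON",
}

def recognize_entities(text):
    """Return (mention, label, start_offset) for each known entity found."""
    found = []
    for mention, label in ENTITY_LEXICON.items():
        start = text.find(mention)
        if start != -1:
            found.append((mention, label, start))
    return sorted(found, key=lambda entity: entity[2])

sentence = "Sean Gourley runs Primer, based in San Francisco."
print(recognize_entities(sentence))
```

Even this caricature hints at why the task is hard: a real system must handle mentions it has never seen before, ambiguous names, and context that changes what a word refers to.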
Because the system works by essentially fine-tuning a powerful pre-trained A.I. algorithm for a customer’s specific needs, it can start producing good results with just about 10 to 20 examples, she says. And it is designed for what’s called “active learning,” meaning the A.I. system gets progressively more accurate with each new example it is given. This is especially true if human experts curate the examples so that they are the most instructive—exposing the system to those tricky edge cases that require human expertise to classify. “With active learning, you can need 30 times less data to get the same model performance,” Gourley says.
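The active-learning loop Gourley describes can be sketched in miniature. The code below is an illustration under strong simplifying assumptions, not Primer’s implementation: a one-dimensional nearest-centroid classifier stands in for the fine-tuned model, and an `oracle` function stands in for the human expert who labels the examples the model is least sure about.

```python
# Toy sketch of active learning via uncertainty sampling. Each round,
# the model asks the human to label the unlabeled point closest to its
# decision boundary -- the example it is least certain about, which is
# where the informative edge cases live.

def centroid(values):
    return sum(values) / len(values)

def decision_boundary(pos, neg):
    """Midpoint between the two class centroids of the labeled data."""
    return (centroid(pos) + centroid(neg)) / 2

def most_uncertain(unlabeled, boundary):
    """The unlabeled point nearest the boundary is the most informative."""
    return min(unlabeled, key=lambda x: abs(x - boundary))

# Tiny labeled seed set (feature values for a binary task).
pos, neg = [9.0, 8.0], [1.0, 2.0]
unlabeled = [7.5, 0.5, 5.2, 9.5]

def oracle(x):          # stands in for the human labeler
    return 1 if x > 5.0 else 0

for _ in range(2):      # two rounds of active learning
    boundary = decision_boundary(pos, neg)
    query = most_uncertain(unlabeled, boundary)
    unlabeled.remove(query)
    (pos if oracle(query) else neg).append(query)

print(sorted(pos), sorted(neg))
```

Note that the loop spends its labeling budget near the boundary (5.2, then 7.5) and never asks about easy points like 0.5 or 9.5; that selectivity is the source of the data savings Gourley cites.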
Primer plans to give Automate’s customers analytic tools to help them determine how good the A.I. system they’ve built is. The company will also help them find any examples in the training data that may be incorrectly labelled—a common problem that can hurt how well the software performs.
Gourley says so far Automate is good for doing binary classification tasks. But Primer plans to add the ability to do more complex document sorting in the near future, as well as tasks like figuring out the relationship between entities in documents and summarizing documents. John Bohannon, the company’s director of science, says that it also plans to introduce tools that will help users figure out which datapoints in a document were most important to the A.I. system’s classification decisions: That’s essential, he says, because it will allow users to detect problems of bias and fairness.
Gourley says that Primer is still trying to figure out exactly how it will price Automate. But he says the company wants an annual license for the system to cost about a third of what it would cost a customer to hire a machine learning engineer.
Whether that’s enough to make Primer’s Automate competitive is unclear: Some competing no-code A.I. platforms cost a fraction of that. Obviously.ai, for instance, costs just $145 per month. Akkio starts at $500 per month for a version for small-to-medium-sized businesses but costs more for a license suitable for a larger corporation. That’s the kind of pricing that is likely to make A.I. really ubiquitous.
There’s another issue raised by the proliferation of powerful no-code A.I. software: control. Empowering every employee to build and train A.I. algorithms sounds great in theory, with the potential to transform businesses in ways managers can’t even imagine. But, at the same time, when a company is running thousands of A.I. models, it becomes very hard to keep track of what they are all doing and to avoid ethical, data privacy or governance pitfalls. The rise of no-code A.I. makes it imperative that companies develop strong policies around the use of A.I. and have systems in place to ensure everyone using the no-code software understands those policies. Companies will need more training in topics like data bias and fairness, and the ability to audit how these systems have been trained. No-code A.I. is like an unbottled genie: It can do amazing things, but you need to be careful what you wish for.
With that, here’s the rest of this week’s A.I. news.
A.I. IN THE NEWS
Twitter cracks down on A.I. bots supporting Amazon in its anti-union stance. Twitter has banned a number of seemingly fake accounts that may have been part of a bot army created by e-commerce giant Amazon, or perhaps someone in its employ, as part of the company's aggressive efforts to defeat a unionization drive at the company's warehouses. Amazon says it has nothing to do with the bots. But, according to tech publication The Register, all the fake accounts had names that started with "Amazon FC," using an acronym that is often used to mean "fulfillment center," which is what Amazon calls its warehouses, followed by a first name, and all claimed to be Amazon workers; they all followed one another and tweeted similar statements in support of the company and against the union. What's more, their profile pictures appeared to have been generated using deepfake technology—the A.I. technique that can generate highly convincing fake still images or videos of people's faces. Amazon has been in hot water lately for its belligerent social media posts defending its activities and attacking critics, with its own public relations executives now admitting the company had gone too far. The company has also been cited by the National Labor Relations Board for illegally firing two workers who had urged the company to do more on climate change and working conditions for its warehouse employees. The company says it did not fire the two women for talking publicly about working conditions, safety or sustainability at the company but because they violated internal company policies, which it says are lawful.
Volvo teams up with Aurora on self-driving. The Swedish car maker is partnering with self-driving startup Aurora to create a new line of autonomous big-rig trucks for the North American market, according to a story in tech publication The Verge. Aurora has been working on autonomous trucks and acquired most of Uber's former self-driving employees and assets when the ride-sharing company abandoned its self-driving effort last year.
Scientists seek to highlight problems with emotion-recognition A.I. through an online game. A group of researchers has created an online game called emojify.info that lets the public play around with an A.I. system that has been trained to try to recognize emotions, awarding them points if they are able to trick the system by pulling faces or fool it into misidentifying an emotion in a particular context. According to a story in The Guardian, the idea of the game is to show the public how fallible these systems are and raise awareness of why deploying them may, in many cases, not be such a good idea.
Come as you A.I.re? A Canadian mental health non-profit has used a Google A.I. system called Magenta to analyze the songs of grunge-legend Nirvana and invent a new song in the same style, with the machine learning system generating all of the music, although the vocals are performed by a singer from a Nirvana cover band. Over the Bridge, the Toronto-based charity, created the "new Nirvana track," called "Drowned in the Sun," as part of its The Lost Tapes of the 27 Club project. The campaign pays tribute to prominent musicians who died at the age of 27 in part due to mental health issues or addiction, including Nirvana lead singer Kurt Cobain, Amy Winehouse, and Jimi Hendrix. The idea is to show people how much has been lost by those singers' untimely deaths by using A.I. to give them a glimpse of what those artists might have been able to continue creating had they lived longer, according to a story in the tech and entertainment publication Unilad.
EYE ON A.I. TALENT
Waymo, the self-driving car company owned by Google, has named Dmitri Dolgov and Tekedra Mawakana as co-CEOs, the company announced in a blog post. Dolgov has been Waymo's chief technology officer and Mawakana has been its chief operating officer. The two replace John Krafcik, who is stepping down from the top spot at the company.
Curai, a health technology company based in Palo Alto, California, has hired Li Deng to be its chief scientist. Deng was previously the chief A.I. officer and head of machine learning at Citadel and the chief scientist of A.I. at Microsoft.
Don Box is stepping down from his position at Microsoft as director of engineering for the company’s mixed reality business unit, which encompasses the HoloLens device, tech publication ZDNet reported. Box, a long-time respected technologist, did not reveal where he is going.
Ursula Burns, the former chairman and CEO of Xerox, has joined the board of the enterprise software company Icertis, which incorporates machine learning to help automate tasks related to contract management, among other uses, according to a company release.
EYE ON A.I. RESEARCH
Robots are getting better at going from simulation to the real world. One of the most promising ways to train A.I. systems is reinforcement learning, where software learns from its own experience, by trial and error, in a simulator. But one problem with this method has been the difficulty of safely transferring the skills learned in simulation to the real world. It turns out that even very subtle differences can sometimes confound A.I. software trained in this way. But scientists are getting progressively better at making it actually work. The latest example comes from the University of California at Berkeley, where researchers were able to take a bipedal robot named "Cassie" and teach it to walk in a simulator—and then get it to actually walk for real. The technique the scientists used is also a good example of a hybrid approach to A.I.—it used some reinforcement learning, but the software didn't have unlimited choices in the simulator. Instead it could select from among a library of pre-designed walking techniques. These techniques may lead to rapid advances in the kinds of robots that may soon be deployed in factories, warehouses and other industrial settings. You can see a video of Cassie strutting her stuff here. And you can read the research paper, which was published on the non-peer reviewed research repository arxiv.org, here.
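The hybrid idea, trial-and-error learning restricted to a library of pre-designed options, can be caricatured in a few lines. Everything below is made up for illustration: the "gaits" and their scores are hypothetical stand-ins for the Berkeley team's far more sophisticated controllers and physics simulator.

```python
import random

# Toy sketch of hybrid reinforcement learning: rather than inventing
# motions from scratch, the learner searches, by trial and error,
# through a small library of pre-designed "gaits". The base scores
# are made-up numbers standing in for a simulator's reward signal.
GAIT_LIBRARY = {"walk": 0.6, "march": 0.8, "shuffle": 0.3}

def simulate(gait, rng):
    """Pretend simulator: the gait's base score plus a little noise."""
    return GAIT_LIBRARY[gait] + rng.uniform(-0.05, 0.05)

def pick_best_gait(trials=50, seed=0):
    rng = random.Random(seed)
    totals = {g: 0.0 for g in GAIT_LIBRARY}
    counts = {g: 0 for g in GAIT_LIBRARY}
    gaits = list(GAIT_LIBRARY)
    for i in range(trials):
        # Try each gait once up front, then explore at random.
        gait = gaits[i] if i < len(gaits) else rng.choice(gaits)
        totals[gait] += simulate(gait, rng)
        counts[gait] += 1
    return max(gaits, key=lambda g: totals[g] / counts[g])

print(pick_best_gait())  # "march": its average simulator score is highest
```

Constraining the search space this way is the point of the hybrid approach: the learner never tries a physically absurd motion, which makes the simulated skills far safer to transfer to real hardware.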
FORTUNE ON A.I.
Commentary: How A.I.-powered companies dodged the worst damage from COVID—by Francois Candelon
Apple CEO Tim Cook talks self-driving cars and retirement—by Jonathan Vanian
Can A.I. help Facebook cure its disinformation problem?—by Jeremy Kahn
Programming language experts win ‘Nobel Prize of computing’—by Jeremy Kahn
The robot made me do it! One of the thorniest dilemmas, as A.I. becomes more capable and more ubiquitous in advising humans what to do, is how humans will know when to trust those suggestions and when to trust their own intuition and judgment. In the past, I have noted the disturbing tendency of people to defer to the machine, even when they ought to know better. The latest example was highlighted this past week in The Wall Street Journal, picking up on research published in November. It shows that people were far more likely to engage in risky behavior—often against their own better judgement—when a robot egged them on.
The experiment, which used a common lab setup to judge risk-taking behavior, involved students using a piece of software to gradually pump air into a balloon. For each pump, the student earned a small cash reward. But if the balloon burst, the student got nothing. "The researchers found that students who took the test while in the presence of the talking robot were more likely to engage in risk-taking behavior," according to the Journal. "They were, for example, 20% more likely to keep pumping the balloon than the control group, who took the test without the robot present, and nearly 40% more likely to pop the balloon than the control group."
“Receiving direct encouragement from the robot overrode participants’ direct experiences and feedback,” says Yaniv Hanoch, an associate professor at Southampton Business School in England and one of the paper’s co-authors. In fact, after the balloon popped, the group that kept receiving encouragement from the robots didn’t change their behavior with subsequent balloons, while the students who took the test without the robot’s encouragement reduced the number of times they pumped the next balloon, likely learning from the negative outcome.
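For a sense of why "better judgment" matters in this kind of task, here is the underlying risk-reward arithmetic with made-up numbers (the study's actual parameters are not reproduced here): each pump adds a small reward, but each pump is also another chance of losing everything, so there is an optimal point at which to stop.

```python
# Illustrative expected-value arithmetic for a balloon-pumping task.
# Hypothetical parameters: each pump is worth 5 cents if the balloon
# survives, and each pump carries a 12% chance of bursting it.
REWARD_PER_PUMP = 0.05
POP_PROBABILITY = 0.12

def expected_payoff(pumps):
    """Total reward if the balloon survives, times its survival chance."""
    survival = (1 - POP_PROBABILITY) ** pumps
    return REWARD_PER_PUMP * pumps * survival

# The expected payoff rises with each pump at first, then falls as the
# cumulative risk of bursting outweighs the extra reward.
best = max(range(1, 31), key=expected_payoff)
print(best)  # 8 pumps maximizes the expected payoff with these numbers
```

With these hypothetical numbers, pumping past eight lowers the player's expected winnings; a robot urging "keep going" is, in effect, talking the player past the optimum.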
While this was just a lab experiment, it doesn't bode well for our robot- and A.I.-mediated futures.