
The best A.I. conference ever—and you’re invited

November 1, 2022, 5:09 PM UTC
James Manyika, Google's head of tech and society, will be among those sharing their wisdom at Fortune's Brainstorm A.I. conference in San Francisco on December 5th and 6th.
David Paul Morris—Bloomberg/Getty Images

Hello dear readers,

I want to thank Alexei Oreskovic and Kevin Kelleher for filling in for me while I was away.

There’s a lot happening in A.I. And while this newsletter aims to bring you the most important updates for a business reader each week, sometimes it’s helpful to both step back and dive deeper. That’s exactly what Fortune’s upcoming Brainstorm A.I. conference will enable you to do.

Taking place in person in San Francisco on December 5th and 6th, Brainstorm A.I. will help illuminate the most immediate opportunities for companies hoping to use A.I. to transform their businesses, as well as highlight some of the most pressing challenges. I’ll be there as one of the conference co-chairs, and I hope you will consider joining me. As an Eye on A.I. reader, I am pleased to offer you a special discount of 20% off the normal registration fee. Use code EOAI in the additional comments section of the registration form.

We have an amazing lineup of speakers for you. These include top executives from Meta, Google, Nvidia, Wayfair, Microsoft, Apple, Land O’Lakes, and Capital One. Senior business leaders from Walmart, eBay, and Expedia will talk about how A.I. is supercharging their operations.

We have a clutch of A.I. luminaries who will tell you where A.I. is heading and what you need to do to build a winning strategy around the technology. These include Fei-Fei Li, the co-director of Stanford University’s Institute for Human-Centered A.I., giving the opening keynote on “A.I.’s Human Factor,” and Andrew Ng, the founder of Landing A.I., who will tell us about the critical shift companies are making from Big Data to Good Data.

Kevin Scott, Microsoft’s Chief Technology Officer, will discuss the advent of large language models and their impact on business. Robotics expert Pieter Abbeel will talk about how robots are poised to reshape the workforce. Joelle Pineau from Meta’s A.I. Research Lab will detail key lessons the social media giant has learned about how to use A.I. effectively. Colin Murdoch, DeepMind’s chief business officer, will reveal how the cutting-edge research lab turns scientific breakthroughs into real business ideas for Google.

This will be your chance to interact with some of the top A.I. leaders in the world and get answers to your pressing questions on how to use this powerful emerging technology in your own organization. How can you use A.I. to increase revenue and profits? How should A.I. be governed? How do you use A.I. ethically and responsibly? How can A.I. improve supply chain management? How can it transform the retail experience? We’ll cover all of this and more. Plus, there will be plenty of time for networking and sharing experiences with one another.

If you’re interested—and hopefully you are—please apply to attend here (click the red “Register Now” button at the top of the page). (And again remember to use Code EOAI in the additional comments field of the application to receive your special Eye on A.I. reader discount!)

And now here’s the rest of this week’s A.I. news.


Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

British data regulator warns on “emotion” recognition technology. The U.K.’s Information Commissioner’s Office has issued a warning that companies should avoid using A.I. that claims to be able to recognize people’s emotions based on facial expressions, saying there’s little scientific evidence to justify the technology’s claims. Companies that ignore the warning and use emotion recognition software for important decisions—such as screening job applicants or detecting fraud—could face fines, Stephen Bonner, ICO’s deputy commissioner, told The Guardian.

Generative A.I. startups are raking in venture capital dollars. That’s according to a story in The Wall Street Journal, which is pegged to a $125 million Series A fundraising round for Jasper, a startup based in Austin, Texas, that uses A.I. to autogenerate blog and marketing copy. (Some of the back-end magic of Jasper is actually handled by OpenAI’s GPT and DALL-E generative models.) The fundraise valued Jasper at more than $1 billion, according to the paper. It also mentioned Stability AI, which recently raised a $101 million seed round for its image generation technology, as well as Replikr, Musico, and GoCharlie.AI, as generative A.I. startups that have seen recent interest from investors.

Biotech startup begins human clinical trials of ALS drug discovered with A.I. Verge Genomics, which is backed by pharma giants Eli Lilly and Merck, as well as investment firm BlackRock, has begun human clinical trials of a drug for the neurodegenerative disease amyotrophic lateral sclerosis (ALS, sometimes known as “Lou Gehrig’s Disease”) that was discovered with the help of A.I., the Financial Times reported. The company used machine learning to analyze millions of datapoints and find a new causal mechanism implicated in ALS, which it then figured out how to target with the new drug it is trialing. Verge said using A.I. shortened the initial research and testing period for the drug prior to human clinical trials to four years, about half the time this phase of research might normally take. As the FT points out, several other A.I.-assisted drug discovery companies also have at least one drug now in human clinical trials, including Exscientia, Insilico Medicine, and Evotec. One of Insilico’s drug candidates is also for ALS.

EYE ON A.I. TALENT

BigBear.ai, a Columbia, Maryland-based A.I. software company, has hired Amanda Long as its chief executive officer, the publication Washington Technology reports. Long had been a vice president in charge of IT automation at IBM.

EYE ON A.I. RESEARCH

DALL-E makes cool images, but it doesn’t actually “understand” language. Generative A.I. systems like OpenAI’s DALL-E 2 that can be instructed in natural language to generate images are all the rage. But it can be easy to overestimate how smart these A.I. models really are. Evelina Leivada, from the Universitat Rovira i Virgili in Tarragona, Spain, Elliot Murphy, from the University of Texas Health Science Center in Houston, and deep learning skeptic Gary Marcus, from New York University, collaborated on a paper in which they demonstrated that DALL-E 2 can’t properly understand many natural language prompts that involve things such as “binding principles and coreference, passives, word order, coordination, comparatives, negation, ellipsis, and structural ambiguity.” The researchers point out that very young children can master these aspects of language, and yet DALL-E 2, despite being trained on billions of images and captions, cannot.

For instance, prompt DALL-E with “the dog is chasing the man,” and at least some of the images show a person behind the dog and are very similar to the images generated by “the man is chasing the dog” (which also don’t reliably show the person behind the dog). Prompt DALL-E with “the vase was broken by the woman,” and DALL-E generates plenty of images that show what appears to be a perfectly intact vase that just happens to be positioned next to a woman. Comparative prompts such as “the bowl has more cucumbers than strawberries” often generated images where there were clearly more strawberries present than cucumbers. There are plenty more examples in the paper, which can be found on the non-peer reviewed research repository arxiv.org here.

The researchers conclude: “In our view, all the recent attention that has been placed on predicting sequences of words has come at the expense of developing a theory of how such processes might culminate in cognitive models of the world, and how syntax serves to regulate form-meaning mappings. For example, a recent account claims that language models represent ‘conceptual role’ meanings because these can be inferred from relationships between internal representational states. Our results show that such representations, to the extent that they exist at all, do not suffice.”


FORTUNE ON A.I.

Ford and Volkswagen pull the plug on robocar unit Argo AI in major setback to their self-driving plans—by Keith Naughton, Monica Raymunt and Bloomberg

3 reasons why Intel’s Mobileye IPO flopped—by Christiaan Hetzner

Commentary: How digital twin technology can bridge America’s chip manufacturing gap—by Chris Rust

Over-the-counter sales may usher in a boom time for A.I.-based hearing aids—by Kevin Kelleher

BRAINFOOD

Dr. Dolittle, here we come. Well, at least that’s what some of the more hyped headlines about using A.I. to “talk to the animals” would have you believe. The reality is, as usual, more complicated—and less in the realm of Dr. Dolittle—if no less exciting for science. Biologists are using machine learning and new sensor technology to analyze animal communication—everything from the dances of bees to the very low-frequency infrasounds that elephants can make—and try to interpret it. Karen Bakker, a researcher at the University of British Columbia, has a new book out, The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants, that chronicles many of these efforts. In an interview this week with Vox, Bakker pointed out that the technology is actually allowing us to begin communicating back to animals in a few limited instances. She notes an experiment in which scientists in Germany studying honeybees used a bee-sized and -shaped robot to mimic the waggles that worker bees use to signal to other bees the location of good flowers for gathering nectar.

As Bakker tells Vox: “We can use artificial intelligence-enabled robots to speak animal languages and essentially breach the barrier of interspecies communication. Researchers are doing this in a very rudimentary way with honeybees and dolphins and to some extent with elephants. Now, this raises a very serious ethical question, because the ability to speak to other species sounds intriguing and fascinating, but it could be used either to create a deeper sense of kinship, or a sense of dominion and manipulative ability to domesticate wild species that we’ve never as humans been able to previously control.”
