This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
William Gibson, the science fiction writer, famously said that the future is already here, it’s just not evenly distributed. Well, that’s certainly true of A.I. adoption.
Where are companies seeking employees with machine learning, data science and A.I.-related skills? Within the U.S., the jobs marketplace ZipRecruiter recently found that just four states dominate: California, Washington, New York and Massachusetts. Together, they account for 90% of all ZipRecruiter-advertised jobs that required advanced-A.I. skills, and 60% of all A.I. jobs.
Given that these states host the headquarters and large branch offices of the FAANG companies, that’s not surprising. But as the cost of living skyrockets in many of these tech hubs, ZipRecruiter found that the quest for A.I. talent is rapidly picking up steam in five other states: Colorado, Utah, Virginia, Texas and Arizona. A.I.-related job postings in these states grew 93% over the past two years, nearly three times faster than in the four leading locations.
ZipRecruiter also found some parts of the U.S. where companies seem to be seriously behind in terms of hosting in-house A.I. teams: Mississippi, Alaska, Kentucky, West Virginia and Louisiana. This does not necessarily mean those businesses aren’t using A.I.—but it is probably an indication that if they are doing so, they’re purchasing tech from vendors and consultants, rather than building it on their own.
That’s the U.S. For what’s happening internationally and across industry sectors, CB Insights’ latest AI 100 looked at which A.I.-related startups are doing best. The firm combines indicators of R&D strength, such as patents, with data on funding, existing customers and potential market sizes.
U.S.-based startups dominate the list, accounting for more than half of the 100 companies. Surprisingly, China, despite its A.I. ambitions and much-touted leadership in some areas such as facial recognition, had just four startups in the top 100. (That’s fewer than Canada, which had six. And the U.K. punched well above its population size, with seven.) Some very notable absences from the list: No startups from Australia or New Zealand, none from traditional tech powerhouse South Korea, and just one each from the entire continents of South America and Africa.
CB Insights drew from 15 different industry sectors. It’s striking how crowded some fields were: Healthcare boasted nine of the top 100, and finance and insurance (which were combined into a single category) five. Other sectors, such as real estate and legal, each had just one company on the list.
Given the size of those sectors, I doubt that’s because the market opportunity isn’t there. Instead, it probably suggests industries that are late technology adopters in general.
If you’re thinking of creating an A.I. startup, it might be wise to stake a claim in those relatively wide-open spaces.
***
We’re aiming to make this newsletter as valuable for you as it can be. But we need a little help from you: What aspects of Eye on A.I. do you like the most? What sections do you like least? What would you like to see us do more or less of? Reply to this email and let us know.
***
Speaking of value, we hope you enjoy Fortune’s award-winning journalism. Last week, we launched a new subscription model to support our work.
Don’t worry: Eye on A.I., along with all our other great newsletter titles, will remain free to receive in your email inbox. (You will have to subscribe to access web versions of it.) More good news: As loyal Eye on A.I. readers, you’ll get a special 50% discount on your subscription. Just follow this link. I encourage you all to subscribe!
If you have any issues subscribing, email support@fortune.com. If you have general questions, email feedback@fortune.com.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
This story has been updated to clarify that ZipRecruiter is a jobs marketplace, not a hiring firm.
A.I. IN THE NEWS
Clearview A.I. keeps sucking up data and controversy. The New York-based facial recognition startup has been trying to build a database of all U.S. police mugshots taken over the past 15 years, OneZero revealed, citing a company email the publication obtained through public records. Meanwhile, The New York Times reported that before Clearview gained notoriety for its work with law enforcement, its wealthy backers used a secret beta version of its facial recognition app as a private plaything, with one billionaire using the software to investigate the identity of his daughter's date.
London police are using facial recognition, badly. The U.K.'s Metropolitan Police released figures showing that its efforts to deploy facial recognition to identify wanted suspects in crowded public spaces have not been working that well, The Register reports. On one day, its system at Oxford Circus, in central London, had an almost 88% false positive rate. On another day, the system broke entirely.
Deezer's A.I. tool could create a copyright nightmare. At least, that's the view of Pitchfork, which wrote about a machine learning tool called Spleeter that can separate the individual tracks in a piece of recorded music, essentially reverse-engineering it. The tool, which was created by the streaming service Deezer and released for free late last year, is becoming popular with DJs and others who want to sample just part of a piece of music—say, the vocals, the bass line or the kick drum. Pitchfork says that ability, known in the music biz as "source separation," is a boon to music historians, archivists and obsessive fans who want to analyze the contribution of a single musician to a given piece. But, the publication argues, it may also cause a huge headache for artists and record labels, who will find it even harder to guard their intellectual property.
Twitter announces a competition for an A.I. to predict which posts will get ratioed. Twitter has announced $25,000 in prize money for a contest that challenges people to develop an algorithm that can accurately predict how users will respond to a given tweet. The hope is that the winning algorithm will help Twitter develop a new recommendation system, the company said in a blog post. It has made a database of 200 million tweets and responses, all de-identified, available for contestants to train their algorithms on.
Facebook shows off A.I. used to block fake accounts. The social network revealed details of the machine learning system it built to block accounts that violate its terms of service. Facebook's method, which it calls Deep Entity Classification, or DEC, and which I wrote about for Fortune here, creates a statistical image of each user's account. The company says it is now able to remove more than 97% of fake accounts before users flag them for review, and that it has successfully taken down billions of fake accounts since deploying the system over the past two years.
EYE ON A.I. TALENT
- Western Digital Corp. has appointed David Goeckeler as its new chief executive officer. He was previously head of Cisco's network and security business.
- J.P. Morgan Chase & Co. has hired Gill Haus as head of digital technology for its consumer banking division, Bloomberg News reported. He was previously chief technology officer for enterprise products and platforms at Capital One Financial Corp.
EYE ON A.I. RESEARCH
DeepMind unveils A.I.-generated predictions of protein shapes useful in coronavirus treatment. The London-based artificial intelligence research company, owned by Alphabet, made headlines 18 months ago when its system AlphaFold proved better at predicting the shape of a protein based on its underlying amino acid sequence than any previous software method. Now DeepMind has let loose an updated version of the algorithm on the six proteins associated with SARS-CoV-2, the coronavirus that causes COVID-19.
Knowing those shapes is an important step towards developing possible drug treatments for the disease. DeepMind said it used AlphaFold to target proteins that had not previously received much attention and might point to new drug development avenues.
When AlphaFold won 2018's Critical Assessment of Structure Prediction 13 (CASP 13) competition, a big biennial contest for software that can accurately predict how a protein will fold, it did so with an accuracy of around 58% (the next best software had an accuracy of just 7%). So AlphaFold isn't perfect—and DeepMind has warned that its folding predictions for SARS-CoV-2's proteins have not been confirmed. The company said, however, that it tried AlphaFold on one protein associated with SARS-CoV-2 whose shape has already been experimentally verified and mapped—and the algorithm accurately predicted that shape. This gives researchers confidence that AlphaFold's predictions for other proteins associated with the virus stand a good chance of being accurate as well.
FORTUNE ON A.I.
What is Clearview AI and why is it raising so many privacy red flags?—by Alyssa Newcomb
Meet the A.I. that helped Facebook remove billions of fake accounts—by Jeremy Kahn
Exclusive: For $3, a ‘robot lawyer’ will sue data brokers that don’t delete your personal and location info—by Jeff John Roberts
Connected vehicles will make our roads safer—but only with regulators’ help—by Michael Moskowitz
Some of these stories require a subscription to access. Thank you for supporting our journalism.
BRAIN FOOD
An artist uses A.I. to prove we're all just a little bit racist. London-based video artist Karen Palmer has teamed up with computer scientists from the city's Brunel University, social scientists from New York University and developers at the research lab ThoughtWorks Arts for a video project called Perception iO, which uses an emotion-detection A.I. to analyze viewers' faces for four emotions: anger, fear, surprise and calm. The project is currently on display at the Cooper Hewitt, Smithsonian Design Museum in New York City.
The film places the viewer in the role of a police officer facing a number of tense situations. The initial setup of the film is random: The viewer sees either a white or black actor portraying either someone committing a crime or someone suffering from a serious mental health issue. As the viewer watches the scenario, a camera pointed at their face tracks eye movements and feeds facial expressions through the algorithm. How the narrative of the film plays out depends on where the viewer's gaze goes and which emotions they express—emotions that may reveal unconscious prejudices. Palmer wants people to confront their own biases as well as the privacy implications of increasingly ubiquitous facial and emotion-detection systems. (She has made the source code of her emotion-detection A.I., EmoPy, freely available for others to use.)
The project sounds cool and I'm looking forward to checking it out if it makes it back over to London. This type of ultra-personalized filmmaking may one day go mainstream. One drawback about the technology, though: If none of us can ever see the same exact film, what will happen to our water cooler conversations about the latest Netflix series? (O.K., all you philosophy and critical-lit majors: Yes, I know we already never see the same film—we are all prisoners of our own subjective experience. But with a unified plot, it is somewhat easier to bridge those subjectivities through dialogue.)