Good morning, readers. Fortune legal writer Jeff Roberts here filling in for Jonathan, who is a proud new papa. While he’s on leave, you’ll be hearing from Jeremy and me.
While AI is not something I write about on a regular basis, I’ve been surprised at how much it has become part of law and the legal profession in recent years.
The number of patent applications that concern AI, for instance, has roughly doubled from 30,000 to 60,000 in the past 15 years, and AI-related inventions now account for 15% of overall applications. Meanwhile, some intellectual property scholars are wondering whether to recognize machines as patent or copyright owners.
The legal issue is serious enough that agencies from the U.S. Copyright Office to the UK Intellectual Property Office have arranged public consultations to find an answer. Plus, a Missouri scientist is suing the Patent Office for refusing to acknowledge the role his AI system allegedly had in discovering an invention.
For now, authorities have been reluctant to award IP rights to non-human owners, but it feels like a matter of time until a jurisdiction somewhere in the world takes this leap, especially as AI takes on a greater role in writing software code that generates creative works.
The debate over AI’s role in law extends to more fundamental issues of justice. A growing number of companies provide tools that claim to anticipate how courts will rule in a given case. The process involves asking software to assess a host of factors—from venue to precedent to the judge in the case—in order to predict a ruling, which in turn affects a party’s choice whether to litigate or settle. I’ll leave it to experts to decide whether this technically counts as “AI,” but it’s clearly another example of machines challenging human decision-making. The phenomenon is especially notable given how judges (in theory, at least) are supposed to be among the wisest people in our society.
All of this raises the question of whether judges and lawyers will one day be replaced by algorithms—an outcome that could make the justice system less expensive and possibly more fair. For now, the prospect seems unlikely. In May, for instance, the buzzy startup Atrium, which had raised $75 million to “revolutionize” law firms with fancy software, quietly shut down.
The jury is still out, as they say. Thanks for reading—more AI news below.
Jeff John Roberts
@jeffjohnroberts
jeff.roberts@fortune.com
A.I. IN THE NEWS
Fix AI lending with...more AI: The finance industry has struggled to use AI in assessing who is eligible for loans—too often systems are built on data suffused with historic racism. An HBR author proposes building training models “not merely on the loans or mortgages issued in the past, but instead on how the money should have been lent in a more equitable world.”
Show us the code: AI is suffering from a replication crisis. Academics are fed up with companies like Google publishing flashy research without sharing the data that underlies it. “It’s more an advertisement for cool technology” than science, says one researcher who, along with 30 others, called on the journal Nature to require contributors to disclose the source of their findings.
Stand down, slaughterbot: Fears of warfare involving autonomous “slaughterbot” machines are likely overstated, says an Axios report on military use of AI. Even though drones proved decisive in the recent Armenia-Azerbaijan conflict, the biggest role for AI in the future is likely to be speeding human communication and decision-making rather than making Terminator-style machines.
Everything in moderation: Facebook provided more details about how it's using AI to police its troubled platform. The tech giant says machine learning now forms a bigger part of its moderation efforts, but acknowledges there are limits. “The system is about marrying AI and human reviewers to make less total mistakes. The AI is never going to be perfect,” says a Facebook engineer.
FORTUNE ON A.I.
A.I. recruiting startup wins competition to help military veterans find jobs—By Jeremy Kahn
The rise of the MOOCs: How Coursera is retraining the American workforce for a post-COVID economy—By Beth Kowitt
Apple’s user tracking prompts privacy complaint from Facebook nemesis Max Schrems—By David Meyer
He’s worried A.I. may destroy humanity. Just don’t confuse him with Elon Musk—By Jeremy Kahn
BRAIN FOOD
Catch me if you can: Remember the Pokémon craze of 2016, when half the world was stumbling down sidewalks and back alleys trying to capture the digital creatures on their phones? That cooled off, but Pokémon are still with us—and, thanks to AI, now number more than ever.
Programmer Matthew Rayfield makes digital toys, and recently used his own code and OpenAI's Generative Pretrained Transformer 2 (GPT-2) to create 3,000 new little beasties, some of which come surprisingly close to passing for real Pokémon.
From a Vice article explaining the project:
“The result was 100,000 lines worth of sprites. He used that text to re-train GPT-2, which output a random text-based sprite, and then reverse-engineered the line version into a colored-in image. What comes out are garbled little pixel creatures that, if you use your imagination, are pretty close to Pokémon.”
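If you're curious how a sprite becomes "text" that a language model can chew on, here's a rough sketch of the idea: each pixel maps to a character, so a model trained on plain text can generate images line by line. To be clear, the palette, function names, and encoding below are my own illustration, not Rayfield's actual code.

```python
# Hypothetical sketch of the "sprite as text" trick: each pixel in a
# small palette becomes one character, so a text model like GPT-2 can
# learn to emit sprites as lines of text. Purely illustrative.

PALETTE = {0: ".", 1: "#", 2: "o"}   # e.g. background, outline, body color
REVERSE = {ch: px for px, ch in PALETTE.items()}

def sprite_to_text(sprite):
    """Encode a 2D grid of palette indices as newline-separated text."""
    return "\n".join("".join(PALETTE[px] for px in row) for row in sprite)

def text_to_sprite(text):
    """Decode generated text back into a 2D grid of palette indices."""
    return [[REVERSE[ch] for ch in line] for line in text.splitlines()]

sprite = [
    [0, 1, 1, 0],
    [1, 2, 2, 1],
    [0, 1, 1, 0],
]
encoded = sprite_to_text(sprite)
print(encoded)
assert text_to_sprite(encoded) == sprite   # the round trip is lossless
```

Re-training the model on thousands of these text sprites, then decoding whatever it spits back out, is what produces the "garbled little pixel creatures" Vice describes.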
It’s a cool project, but I’m waiting to see if Rayfield can figure out how to make legions of new Transformers.