Two misuses of popular AI tools spark the question: When do we blame the tools?

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

[Photo: Law enforcement forensic experts search the remains of a burnt-out Tesla Cybertruck for evidence.]
Police officials have said that Matthew Livelsberger, who killed himself and then blew up his Tesla Cybertruck outside the Trump Hotel & Tower in Las Vegas, used OpenAI's ChatGPT to research explosives and possible targets. The incident raises questions about what responsibility AI companies should bear for nefarious uses of their products.
    Ethan Miller—Getty Images

    Hello and welcome to Eye on AI. In today’s edition…Who is responsible if AI contributes to harm?; Anthropic will reportedly triple its 2024 valuation in a new funding round; Meta is hosting AI chatbots that break its policies; Microsoft open-sources Phi-4; AWS announces $11 billion for Georgia data centers; and AI healthcare funding levels out. 

Here are two news stories you may have seen over the last few days: the man who exploded a Cybertruck outside the Trump hotel in Las Vegas used ChatGPT to help plan the attack, and people are using the AI video tool Runway to insert cartoon Minions into real footage of mass shootings, skirting content filters so the videos can be posted on social media.

    The stories have sparked debate over how much AI itself is to blame and, by implication, to what extent AI should be regulated. Is AI just a tool that can be used for good or ill? Is there something inherently more dangerous about AI than, say, a Google search, or traditional video editing software? When AI is involved in incidents that cause or could cause harm, should we hold the developers of that technology in any way responsible for what has occurred? 

    If this sounds familiar, that’s because this is largely the same debate we’ve been having about social media for over a decade (and were having again this week in the wake of Meta’s sharp shift away from content moderation). It’s also the same conversation America has long been having about guns. But that doesn’t mean we don’t need to be having this conversation about AI. 

    When ease-of-use is a bug, not a feature

The information ChatGPT provided to Matthew Livelsberger, the man who killed himself inside his explosive-laden Cybertruck, which then detonated outside the Trump hotel in Las Vegas, was taken from the internet. It was therefore available through other means. At this time, we have an idea of what he asked ChatGPT—questions about explosives and targets, the speed at which certain rounds of ammunition would travel, and whether fireworks are legal in Arizona, according to police—but not what information ChatGPT provided (OpenAI told the AP that the chatbot did provide warnings against harmful or illegal activities).

We also know that ChatGPT makes accessing information easier and more conversational—that's the whole point of it. It's why some top scientists have warned that generative AI could give people who wouldn't otherwise have the knowledge or skills the ability to create bioweapons, for example. Sometimes, a barrier to entry, or "friction" as it's often called in the tech world, is a good thing.

In the case of the Runway Minions and school shooting videos, there's a similar consideration: generative AI makes creating such content far too easy. Yes, someone could perhaps create the same videos with traditional editing software such as Photoshop or After Effects. But while both sets of tools can be used to create similar outputs, the latter requires significant skill and experience, as well as the purchase of a pricey software package. The former requires only knowing how to write a sentence.

    The difference with AI

    While it’s true that other technologies have also been used for both good and bad, Vincent Conitzer, a professor of computer science and member of the Responsible AI group at Carnegie Mellon, tells me AI is different in some key respects. 

    “One is that our understanding of generative AI is still limited in important ways. We don’t deeply understand why these systems often work as well as they do, we can’t predict what outputs they will produce or figure out why they produced the output they did. And, most importantly here, our techniques for keeping them safe are still much too limited and brittle,” he said.

    What’s also unique to AI is how quickly it’s being developed and released into the world. The creators of and investors in generative AI describe the technology as powerful enough to transform the economy and society. That kind of power means we should pay particular attention to what can go wrong.

    Center for AI Safety director Dan Hendrycks tells me we shouldn’t wait for tragic or catastrophic incidents to occur. 

    “This is especially important as in the coming months I expect to see continued rapid progress. That’s exciting on one level, but it serves no one to pretend that it can be achieved without risk mitigation and common sense safeguards,” he said.

    Thanks for reading. Now, here’s more AI news.

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

In its final days, the Biden Administration will introduce a tough new export regime for AI computer chips. That's according to a report from Bloomberg News, which says the White House plans to create a tiered system that will exclude most of the Middle East, South America, India, Africa, and Southeast Asia from easily accessing the most advanced AI chips. Only close U.S. allies—such as Canada, most Western European nations, Australia, New Zealand, Japan, and South Korea—will be in the initial set of "Tier 1" countries not subject to restrictions. The new regime comes amid evidence that Chinese companies have continued to obtain cutting-edge AI chips from companies such as Nvidia by purchasing them from firms in locations not covered by the existing export controls or by running their AI software in data centers located in those places. The story says some in Congress hope the U.S. will use access to advanced computer hardware as a bargaining chip to persuade countries to distance themselves further from China and Russia.

Anthropic is in advanced talks to raise $2 billion at a $60 billion valuation. That's according to the Wall Street Journal. The figure is more than triple the $18 billion the company was valued at last year and would make Anthropic the fifth-highest-valued U.S. startup. The round will reportedly be led by Lightspeed Venture Partners. It's not yet clear if Amazon, the company's close partner, which has also provided much of its funding so far, is participating.

Meta is hosting AI character chatbots that break its own policies. Meta bans users from creating AI characters that imitate real-life people without their permission, people who have died in the past 100 years, trademarked fictional characters, or religious figures. Yet NBC News found AI characters on Instagram named after and resembling Taylor Swift, Donald Trump, MrBeast, Adolf Hitler, Captain Jack Sparrow, Elsa from Disney's Frozen, Jesus Christ, Muhammad, and more. The report comes as Meta announced sweeping changes to its speech and content moderation policies, including recalibrating automated moderation systems to prioritize only high-severity violations (such as terrorism) and investigating lower-severity violations only when users report them.

Microsoft open-sources its new Phi-4 model. The model, released in December and designed for more complex reasoning tasks, is now available on Hugging Face with downloadable weights and a license that allows commercial use. The release—or non-release—of model weights has been a contentious point in recent debates over what counts as open-source, with critics arguing that many "open-source" AI models aren't truly open because their weights (the numerical parameters that determine how a model processes and generates language and data) aren't available or because their licenses restrict how they can be used. You can read more from VentureBeat.

AWS to invest at least $11 billion in cloud and AI infrastructure in Georgia. The state's tax incentives, cheap electricity, and existing fiber-optic infrastructure have made Georgia a hot spot for big tech's infrastructure ambitions, with Microsoft, Google, Meta, and X all building in the state. Last year, AWS also said it plans to invest $11 billion in AI and cloud infrastructure in Indiana. The new wave of large-scale infrastructure projects is being propelled by the generative AI boom. You can read more from TechCrunch.

    FORTUNE ON AI

    Nvidia’s value is now worth more than AMD, Arm, Broadcom, and Intel combined—and doubled—by Marco Quiroz-Gutierrez

    Sam Altman’s sister has taken her graphic claims he sexually abused her to court—‘utterly untrue’ responds the CEO of OpenAI —by Christiaan Hetzner

    Memphis warns it may not be able to power Elon Musk’s lofty ‘Colossus’ supercomputer expansion plans —by Jessica Mathews

    AI CALENDAR

    Jan. 16-18: DLD Conference, Munich

    Jan. 20-25: World Economic Forum, Davos, Switzerland

    Feb. 10-11: AI Action Summit, Paris, France

    March 3-6: MWC, Barcelona

    March 7-15: SXSW, Austin

    March 10-13: Human [X] conference, Las Vegas

    March 17-20: Nvidia GTC, San Jose

    April 9-11: Google Cloud Next, Las Vegas

    EYE ON AI NUMBERS

    $10.5 billion

That's how much venture capitalists invested in AI-driven healthcare and life sciences companies in the first three quarters of 2024, according to a new report from PitchBook published this week. Just slightly topping the $10.1 billion invested in 2023, the figure shows investment leveling off following a boom in 2022. That year, the amount invested in these companies leapt to $22 billion from $16.5 billion.

    What’s more, the number of deals suggests investors are making fewer, bigger bets. While capital invested remained steady, the number of deals in 2024 declined (668 in 2023 compared to 511 in 2024). 

    This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.