The truth about how the AI sausage is being made in Washington

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

    Sam Altman, CEO of OpenAI
    Alex Wong/Getty Images

    Hello and welcome to Eye on AI.

    A report published this past week in Politico revealed how an organization backed by Silicon Valley billionaires and tied to leading AI firms including OpenAI and Anthropic is covertly wielding influence across Washington. The organization, called Open Philanthropy, is funding the salaries of more than a dozen AI fellows working in key congressional offices, across federal agencies, and at influential think tanks where they’re directly influencing AI policy. 

    “They’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks—a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms,” reads the article.

    As Senate Majority Leader Chuck Schumer (D-N.Y.) leads Congress’s inquiry into how the government should regulate AI, his top three lieutenants on AI legislation—Sens. Martin Heinrich (D-N.M.), Mike Rounds (R-S.D.), and Todd Young (R-Ind.)—each have an Open Philanthropy-funded fellow working with them on AI or a closely related issue, according to the article. Democratic Sen. Richard Blumenthal of Connecticut, whom Politico describes as a “powerful member” of the Senate Judiciary Committee that recently unveiled plans for AI licensing, also has one of these fellows in his office; that fellow worked at OpenAI immediately before coming to Congress. 

    Open Philanthropy-funded fellows are also working at the Department of Homeland Security, the Department of Defense, and the State Department, as well as on the House Science Committee and the Senate Commerce Committee, both of which are involved in the development of AI rules. 

    Just weeks ago, when the Senate kicked off its AI listening sessions with tech CEOs, critics took issue with policymakers seeking input largely from the powerful executives who stand to profit from the technology. They were also skeptical of the CEOs’ stated support for regulation. Now, at a pivotal regulatory moment, the Open Philanthropy operation shows how Silicon Valley forces are going beyond answering lawmakers’ questions and organizing to quietly sway government officials from the inside. And it’s a reminder of how vulnerable our government is to influence from special interest groups.

    The group’s efforts—which many of Politico’s sources criticized as a conflict of interest—are allowed because of the Intergovernmental Personnel Act of 1970, which lets nonprofits cover the salaries of fellows working on Capitol Hill or in the federal government. The U.S. Supreme Court’s highly criticized Citizens United ruling in 2010, which gave corporations and special interest groups the green light to spend unlimited funds supporting politicians in elections, also opened up a massive avenue for tech companies to wield influence. Tech groups have also linked up with political consultants to kill bills that go against their best interests. And last year, Big Tech took its lobbying to new heights: Amazon, Apple, Google, Meta, and Microsoft spent nearly $69 million lobbying the federal government in 2022, a 5% increase over the prior year. Apple alone ramped up its total lobbying spend by 44% compared to 2021, while Intel increased its lobbying spend by 72% over the same period. 

    These issues are, of course, by no means limited to tech—other industries, from energy and pharmaceuticals to real estate and banking, engage in the same kind of self-interested political dealing. But given the unique role tech companies play in managing the flow of information, holding and directing our attention, and powering much of our economy, the impact of their political influence operations extends far beyond the industry itself. With AI, the tech industry’s reach into our daily lives will only grow, making it more important than ever to understand the commercial motives shaping our regulations.

    And with that, here’s the rest of this week’s AI news.

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    The U.S. is set to expand restrictions on AI chip exports to China. That’s according to Reuters. The new restrictions will target some AI chips that fall just below the current technical parameters, essentially closing loopholes left open by the Biden administration’s original controls unveiled last October. The new restrictions will also require companies exporting permissible chips to report their shipments. 

    Google adds text-to-image generation to its generative AI-powered Search. With Microsoft integrating DALL-E into Bing, it was only a matter of time until Google responded with its own image generator. The tool will appear for Google users who have opted into the company’s generative AI-powered Search experience (SGE), according to a company blog post. Users can simply type their instructions in the regular Google search box or in the “Images” tab.

    OpenAI is generating revenue at a pace of $1.3 billion a year. CEO Sam Altman shared the annualized rate with staff last week, according to The Information. The revenue “run rate” figure implies that OpenAI is generating more than $100 million per month—a 30% increase from what it was generating this summer. The revenue boom is mostly attributed to growth in ChatGPT subscriptions—the company launched its paid ChatGPT Plus tier in February and followed up with its Enterprise tier in late August.
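    For a sense of the arithmetic, here’s a quick back-of-the-envelope sketch. Only the $1.3 billion annualized rate and the 30% increase are reported figures; the monthly split below is an inference, not a disclosed number.

```python
# Back-of-the-envelope arithmetic on the reported figures. Only the
# $1.3B annualized rate and the 30% increase come from the report;
# the monthly breakdown is an inference for illustration.
annualized_run_rate = 1.3e9            # $1.3B per year, per The Information
monthly = annualized_run_rate / 12     # ~$108M per month
prior_monthly = monthly / 1.30         # ~$83M/month before the 30% jump
prior_annualized = prior_monthly * 12  # ~$1.0B annualized this summer

print(f"current monthly: ${monthly / 1e6:.0f}M")               # ~$108M
print(f"implied summer monthly: ${prior_monthly / 1e6:.0f}M")  # ~$83M
```

    In other words, if the 30% figure holds, OpenAI was generating roughly $1 billion on an annualized basis as recently as this summer.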

    Adobe’s AI roadmap has Wall Street cheering for a comeback. That’s according to the Wall Street Journal. The company saw a stock bump after it announced a suite of new AI tools this past week, including some impressive audio features and the ability for its text-to-image generators to interact with posable 3D models. Adobe has a lot on the line in the generative AI era, as text-to-image generators make it possible for anybody to create complex images nearly instantly, without any of the skills needed to use programs like Photoshop and Illustrator. But Adobe is moving quickly on AI and drawing plenty of fanfare from analysts. The company’s share price is up more than 66% since January, nearly double Microsoft’s gains and comfortably ahead of Google’s as well.

    A student researcher created a machine-learning algorithm to decipher ancient Roman scrolls that were too damaged for humans to read. The 21-year-old computer science student won a global contest by becoming the first person to decipher letters from the previously unreadable papyrus scrolls, which were turned to ash and buried for 2,000 years by the same volcanic eruption that destroyed Pompeii. Researchers are heralding the breakthrough and believe it could unlock hundreds of other ancient texts, according to Nature.

    EYE ON AI RESEARCH

    Reducing the world to stereotypes. Bias in algorithms and AI systems is a critical flaw with real-world consequences, and evidence has shown that white, Western concepts are overrepresented in training data, skewing the outputs of models. To further investigate how this plays out in image generators, Rest of World used Midjourney to create 3,000 AI images and test how the technology visualizes different countries and cultures. 

    For the prompts, Rest of World chose five generic concepts (“a person,” “a woman,” “a house,” “a street,” and “a plate of food”) and then modified each for different countries, including China, India, Indonesia, Mexico, and Nigeria, as well as the U.S. for comparison (e.g., “an Indian person,” “a house in Mexico”), as in the sketch below. The analysis showed a hugely stereotypical and reductive view of countries and national identities—and the resulting visuals paint a stunning picture.  
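    As a rough illustration of the scale of that prompt grid, here’s a minimal sketch. This is hypothetical code—Rest of World hasn’t published its tooling, and the localize helper is an invention for this example—but crossing five concepts with six countries yields 30 prompts, which puts the 3,000 images at roughly 100 per prompt.

```python
from itertools import product

# Concepts and countries named in the Rest of World analysis.
concepts = ["a person", "a woman", "a house", "a street", "a plate of food"]
countries = ["China", "India", "Indonesia", "Mexico", "Nigeria",
             "the United States"]

def localize(concept: str, country: str) -> str:
    """Attach a country to a generic concept, e.g. 'a house' -> 'a house in Mexico'."""
    # Rest of World used nationality adjectives for people ("an Indian
    # person"); "from {country}" is a simplification for this sketch.
    if concept in ("a person", "a woman"):
        return f"{concept} from {country}"
    return f"{concept} in {country}"

prompts = [localize(c, k) for c, k in product(concepts, countries)]
print(len(prompts))  # 30 prompts; ~100 images each accounts for the 3,000 total
```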

    “‘An Indian person’ is almost always an old man with a beard. ‘A Mexican person’ is usually a man in a sombrero. Most of New Delhi’s streets are polluted and littered,” says the interactive article, which lets readers toggle through the results. Some of what Midjourney created even clashes starkly with the culture it depicts: the results for “Chinese plate of food” mostly depicted lemons and limes—ingredients rarely used in Chinese cooking—as well as chopsticks in groups of three instead of pairs. You can read the full article here.

    FORTUNE ON AI

    Netflix exec downplays AI: ‘There is not an algorithm in the world to tell you the next thing that’s going to actually connect and resonate with people’ —Kylie Robison

    The U.S. is modernizing its 39-year-old organ transplant system–just in time for the AI revolution —Tristan Mace

    Wall Street’s ‘Cobol Cowboys’ are spread thin fixing legacy tech—but AI may soon ride to the rescue —Sheryl Estrada and Ben Weiss

    Scientists are using AI to forecast the future of COVID—and, potentially, to predict the next pandemic —Erin Prater

    RSA CEO: ‘AI will replace humans in cybersecurity. Our new job will be to protect it’ —Rohit Ghai

    ChatGPT: What you need to know about AI’s hottest chatbot —Megan Arnold

    How big companies from EY to Johnson & Johnson are learning to master AI prompts —Ryan S. Gladwin

    BRAINFOOD

    Trusting our own eyes in a world of AI. As the world watches the devastating violence in Israel and Palestine unfold, the feeling that it’s more difficult than ever to discern real from fake is palpable. 

    “I’ve seen so much content reported, debunked, and rebunked(?) that I think I’ve reached the limits of my mind’s ability to understand reality,” wrote Ryan Broderick in Garbage Day.

    CBS News CEO Wendy McMahon told Axios that of the more than 1,000 videos the network sorted through, only 10% were usable. She cited “an influx of deepfakes and misinformation flowing into our newsrooms at a scale, at a speed, at a level of sophistication that will be staggering.”

    Misinformation and disinformation at scale have been a massive problem for society for as long as we’ve watched current events unfold on social media platforms. The difference now is that we have generative AI. And while it’s impossible at this time to quantify the exact impact generative AI has had on the current misinformation storm, there’s no getting around the fact that creating fake videos and propaganda just became dramatically easier. Even just knowing that the floodgates have been opened is enough to make everyone question every piece of content, and therefore, reality. 

    This is the online version of Eye on AI, a free newsletter delivered to inboxes on Tuesdays. Sign up here.