AI’s top research conference morphed into a recruiting extravaganza, summing up a wild 2023

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.


    Hello and welcome to Eye on AI.

    NeurIPS, the long-running machine learning research conference, wrapped up in New Orleans this past weekend. As you can probably guess, it was an especially buzzy year for the event. More than 16,000 attendees, including the world’s top AI researchers and practitioners, came together for the six-day conference, and organizers say they received a record number of paper submissions: 13,330 in total, compared to the 9,634 received last year. 

    The 37th annual conference had all the markers of years past, including talks, demonstrations, paper presentations, and an increasing presence of Big Tech. But while it was previously the go-to destination for the latest machine learning research, the recent explosion of interest in AI set a different tone for this year’s conference. The breakneck pace of development means that no one has been waiting around for NeurIPS to keep up with, let alone publish, the latest AI research breakthroughs. In fact, with a May deadline for submitting papers, many of them were “ancient news” in the world of machine learning by the time the conference came around, one attendee told The Information. Instead, NeurIPS 2023 was a recruiting frenzy.

    Big Tech companies, freshly funded AI startups, financial firms, and even a host of Chinese tech companies swarmed the conference to woo top AI talent. In her dispatch from the event, Semafor Tech and China reporter Louise Matsakis described this year’s NeurIPS as “the hottest event for recruiting AI talent” and said several attendees told her they made the trip specifically in the hopes of landing a job, knowing their skills are in high demand right now as top firms shell out compensation packages that are rich even by Silicon Valley standards. OpenAI, for example, is offering researchers pay packages in the range of $10 million.

    Firms looking to recruit worked every angle. Google sent recruiters to the event, while Sony touted full-time and internship roles listed specifically for NeurIPS attendees. Startups like Perplexity and CentML held flashy happy hours to get researchers in the door, and on LinkedIn, attendees coupled announcements that they’d be at the conference with job listings for open roles and an invitation for candidates to meet up at the event. In a more old-school approach, Matsakis described seeing a recruiter from Tencent hanging up flyers advertising research roles, which were written in both English and Mandarin, and instructed interested applicants to reach out on WeChat. Overall, she said Chinese tech companies and Wall Street trading firms were among the most prominent participants at the conference. 

    “Their presence shows how intense the competition for AI talent has become,” wrote Matsakis, also noting that “while the narrative in Washington is that the U.S. and China are decoupling their technology ecosystems, AI researchers from both countries are still engaging in plenty of mutually beneficial collaboration.”

    Job hunting and recruiting aren’t necessarily new for NeurIPS—PhD researchers in particular have always looked to the conference to promote their research and score tenure-track positions, and tech companies have increasingly turned their attention to the event over the last decade or so as well. The release of ChatGPT, however, took AI mainstream, positioned it as the next big business opportunity, and sent unprepared companies sprinting head-first into the AI era. Aside from models, data, compute, and all the technical makings of AI, these firms need AI talent above all, and NeurIPS put the best of it all in one place.

    I’m not one for technology predictions (and folks certainly are making a lot of them as 2023 comes to a close), but I bet we’ll see some high-profile AI hires early on in the new year as connections made at NeurIPS bear fruit. After all, there’s no sign of this talent war slowing down. Even the U.S. government, which has been scrambling to hire over 400 chief AI officers (CAIOs) by the end of the year, is citing the stiff competition and significantly higher compensation from the private sector as its main hurdle. 

    And with that, here’s the rest of this week’s AI news.

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    OpenAI shares new safety guidelines for the company and says the board can overrule leadership decisions to stop a model from being released. Even if leadership determines that a model is safe to release, the board has the right to reverse that decision, state the new guidelines, made public yesterday. The guidelines also formalize the roles of the three safety teams, which will cover different timeframes and types of risks. OpenAI additionally said it wants to “move the discussions of risks beyond hypothetical scenarios to concrete measurements and data-driven predictions” and that it will continually update “scorecards” for its models, including reevaluating frontier models at every 2x effective compute increase during training runs. 

    Intel introduces AI-optimized PC processors. After first previewing technical details at its developer event back in September, the company this past week officially announced its first wave of new AI-enabled processors and an initial line of devices that use them, including Dell, Microsoft, and Lenovo laptops that are already on sale (with more coming at CES). The new processors, known as “Meteor Lake,” feature a special onboard neural processing unit (NPU) designed to let consumers run AI capabilities directly from their PCs instead of through the cloud. These chips are different from the “Gaudi” AI processors for data centers that Intel is developing to take on Nvidia.

    Tesla recalls two million cars following Autopilot investigation, threatening its defense in half a dozen crash-related lawsuits coming up next year. That’s according to Bloomberg. The company this past week issued a remote software update (a type of recall) aimed at improving driver attentiveness after the top U.S. auto-safety regulator found its Autopilot system fails to keep drivers’ attention. The finding comes after a yearslong investigation and is the exact issue underlying several upcoming lawsuits, such as those in Florida, California, and Texas, which allege that Tesla’s Autopilot is improperly being used on roads for which it wasn’t designed and fails to sufficiently warn drivers who become disengaged. It’s also another setback for the autonomous driving industry after the recent problems at GM’s Cruise.

    Election misinformation is systemic in Microsoft’s GPT-4 powered AI 'Copilot,’ researchers find. According to a new study reported on by Wired, the Microsoft chatbot often replies to questions about elections with lies, conspiracy theories, and outdated or incorrect information, offering answers with factual errors one-third of the time. In addition to giving false information about basics like election dates, the model hallucinated candidates’ positions, gave answers using flawed data-gathering methodologies, and directed users to far-right sites promoting widely debunked theories.

    EYE ON AI RESEARCH

    Privacy progress. One of the top awarded papers at NeurIPS comes via three Google DeepMind researchers. Titled “Privacy Auditing with One (1) Training Run,” the paper presents a novel technique for examining the differential privacy (DP) of LLMs using just a single training run. 

    Differential privacy is a mathematical framework that limits how much any single individual’s data can influence the output of an analysis, bounding what can be learned about any one person when data is processed or analyzed. The proposed technique uses the connection between differential privacy and statistical generalization to perform the analysis, requiring minimal assumptions about the algorithm, and it can be applied in both black-box and white-box settings. Overall, the findings vastly simplify previous techniques for privacy auditing, which required multiple training runs (and thus far more expensive, resource-intensive compute). You can read the paper here.
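    For readers unfamiliar with the guarantee such audits measure, here is a minimal sketch (not code from the paper; the dataset and query are hypothetical) of the classic Laplace mechanism, the textbook way to make a simple counting query differentially private:

```python
import random

def dp_count(records, predicate, epsilon):
    """Release a matching-record count with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices:
    the output distribution changes by at most a factor of
    e**epsilon between datasets differing in one record.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two i.i.d. Exponential(rate=epsilon) draws
    # is Laplace-distributed with mean 0 and scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical example: a private count of records with age >= 40.
random.seed(0)
ages = [23, 31, 45, 52, 29, 61, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

    The noisy answer is unbiased, so averaged over many releases it converges to the true count of 3; a smaller epsilon means more noise and a stronger privacy guarantee. Auditing asks the reverse question: given only the mechanism’s behavior, how large is epsilon really?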

    FORTUNE ON AI

    AI is exposing awkward ties on Meta and Microsoft boards, as heavy-hitting directors Marc Andreessen and Reid Hoffman bet on startup rivals —David Meyer

    An army of 100 million bots and deepfakes—buckle up for AI’s crash landing in the 2024 election —Alexei Oreskovic

    AI will be an even bigger HR focus in 2024. Here are 4 ways it will disrupt the function —Paige McGlauflin and Joseph Abrams

    3 AI trends that will shape your workforce in 2024 —Sheryl Estrada

    Everything we learned at Brainstorm AI this year —Alexei Oreskovic

    BRAINFOOD

    Beyond playtime. Like every sector, the toy market is lighting up its AI engines. A recent Washington Post story about Grok, an AI-powered plush toy designed for kids, gave a peek into what’s coming down the pipeline. 

    Created by Silicon Valley startup Curio and tapping OpenAI’s technology, Grok is essentially a fluffy, rocket ship-shaped ChatGPT you can hold in your hands. Its creators are pitching it as an alternative to screen time and see it as an entirely new hardware medium. The musician known as Grimes, an investor in and advisor to the company, is providing the voice for the toy. (Elon Musk isn’t involved, although the billionaire Tesla CEO has also named his AI company’s chatbot Grok, oddly enough.)

    On multiple levels, Grok looks just like another AI-powered toy I demoed back in 2015. Shaped like a friendly green dinosaur, the CogniToys Dino was pitched as the first toy powered by IBM’s Watson; it raised $275,000 on Kickstarter and sold for years on Amazon and with retailers like Walmart. The glaring difference is that now the technology, to some degree, really works. For that same reason, Grok and other AI-powered toys coming down the pipeline are not going to proliferate without intense scrutiny. 

    While CogniToys freely pitched its Dino as an educational toy, Curio is sidestepping all links to education as the entire education sector scrambles to grapple with AI and ChatGPT continues to prove unreliable for giving accurate information. There are also questions about how these toys will affect children and their development, and what it will mean for children to potentially form close social bonds with AIs. According to the creators, their hope is that Grok will have some degree of “pseudo consciousness,” serve as an assistive technology for parenting (such as telling kids when to go to bed), and even allow parents to tune the conversations to represent their personal beliefs. Those are not small hopes, and they go far beyond playtime. Grok isn’t just a toy launch; it’s a whole new can of worms.

    This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.