Exclusive: Vimeo launches new AI video tools to help employees breeze through hours-long town halls and training videos. Will it usher in the golden age of asynchronous work?

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

    Vimeo is rolling out new generative AI tools to make corporate training videos and town halls less onerous for employees to sit through.
    Rafael Henrique—SOPA Images/LightRocket/Getty Images

    Hello and welcome to Eye on AI.

    Today we have an exclusive on new AI-powered video capabilities from Vimeo—and with it a conversation I think encapsulates a lot of what companies are going through in this current moment of integrating AI into their workflows. 

    Vimeo is not just a YouTube runner-up from the early days of the internet. The company has increasingly sold its products to corporations, supporting customers like Whole Foods, eBay, and Starbucks with tools to help them create, host, and manage in-house video content—from training modules to recordings of town hall meetings. Today, the company will launch Vimeo Central, a bundle of AI video features it hopes will transform these often boring corporate videos into something employees will actually want to watch.

    The primary selling point for the features, according to Vimeo, is that they’ll make it faster and easier for employees to find and consume the information they need. This includes tools for automatically chaptering videos and generating titles and hashtags, making them more searchable. Features for creating summaries and turning hours-long videos into five-minute highlight clips will take this even further, potentially eliminating the need for employees to join those lengthy town hall meetings altogether. Vimeo is also rolling out an AI-powered chatbot to enable users to ask questions about the content of videos. And lastly, the launch includes analytics so companies can see if employees are actually tuning in. Like most companies launching AI tools, Vimeo is tapping OpenAI’s models for the technology. 

    Though these features would have barely been possible a year ago, they’re already becoming commonplace and are in line with what other companies, including competitors in enterprise video, are putting out. Zoom, for example, launched many of the same AI features last fall, including capabilities around chaptering videos and generating summaries. YouTube has also been experimenting with AI features for asking questions about a video you’re watching and summarizing the discussion playing out in the comments. This isn’t to say the new Vimeo features won’t make a difference for its users, but it shows how standard these capabilities are quickly becoming amid ultra-fast AI innovation and a growing appetite for adoption.

    None of this is lost on Vimeo interim CEO Adam Gross, who spoke to Eye on AI for his first-ever interview in the role since joining the company last July. He didn’t paint the features as groundbreaking but rather focused more on the nuance around what he sees as a changing dynamic in how teams communicate. After the pendulum swung perhaps too far toward virtual meetings and chat tools when the pandemic uprooted the way we work, Gross says Vimeo thinks asynchronous communication is actually the key to staying in sync, and the company is betting that video—and what AI can do for video—will deliver it. 

    “Really what Vimeo Central is about is asynchronous,” he said.

    Now, this doesn’t mean there isn’t a time and place for synchronous meetings and live events. But Gross believes these are best utilized when they feed asynchronous content and that “the power is in having this library that your organization can use.” 

    “Having the ability to easily extract, easily query whether it’s a town hall or meeting or training video or screen recording, and surface all of that. Ultimately, I just think it’s going to be easier for people to stay in sync and keep employees engaged and collaborative. Being engaged, I think, is one of the biggest challenges that organizations face,” he said. 

    Altogether, Gross made clear Vimeo is trying to deliver not just AI tools, but an entire AI strategy to its customers as well. Indeed, the strategy part has been one of the biggest challenges for companies trying to adopt AI. In a recent survey of 1,400 C-suite executives in 50 markets conducted by Boston Consulting Group, almost half of executives cited an unclear AI roadmap as the reason they’re ambivalent or outright dissatisfied with their organization’s progress on generative AI so far. Furthermore, a Forrester report on the state of generative AI in 2024 published earlier this month revealed firms are taking a cautious approach to integrating generative AI and are starting with internal usage first, which bodes well for Vimeo’s internally targeted offerings.  

    “We’re not here just selling a bunch of AI features, AI capabilities, that it’s up to you to work out how to plug in and use and fit into your workflow or integrate. And there are a lot of companies that are doing that,” he said. “We think what’s important is providing a full suite of solutions that a company can use end-to-end and has everything they need to be successful with. Which means not just the AI pieces—you need the library, you need capture, you need events, you need the whole picture.” 

    And with that, here’s more AI news.

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    The SEC is investigating whether OpenAI investors were misled amid Sam Altman’s dramatic ousting last year. That’s according to the Wall Street Journal. The agency has been seeking internal records from current and former OpenAI leadership and subpoenaed the company in December, and regulators in its New York office have asked senior OpenAI officials to preserve internal documents. The investigation adds to OpenAI’s growing list of legal troubles—this week, digital media outlets The Intercept, Raw Story, and AlterNet sued the company for copyright infringement in two separate cases, joining the New York Times, which recently filed a suit against OpenAI and partner company Microsoft over their use of copyrighted works.

    Tumblr and WordPress near deals to sell user content to OpenAI and Midjourney. The deals with Automattic, the parent company of Tumblr and WordPress, are imminent and employees have already been gathering data to hand off, reported 404 Media. The data will be used for training AI tools, but it’s not yet clear exactly what types of data from the platforms will be going to each company or how much the deals will be worth. The news comes just a week after Reddit and Google announced a $60 million-per-year deal to offer the platform’s content for training Google’s AI technologies.

    Stack Overflow launches an API to give AI companies access to its content, starting with Google. The AI content deals are really starting to flow. As the launch partner for the new API, called OverflowAPI, Google will use the company’s knowledge for Gemini for Google Cloud. Stack Overflow has long been a go-to resource for developers, and as part of the deal, will work with Google to bring more AI-powered features to its platform, according to TechCrunch. The companies are not sharing the financial terms and the deal is not exclusive, so we can expect to see Stack Overflow partner with other AI companies down the line.

    Apple kills its electric car project, reassigns staff to generative AI efforts. That’s according to Bloomberg. The company internally announced it’s canceling its decade-long bid to build an electric car, surprising the nearly 2,000 employees working on the project. But what’s a (multibillion-dollar) loss for Apple’s auto efforts will be a gain for its AI: Executives told staffers many of them will be moved to the company’s AI division to focus on generative AI projects. AI is an increasingly key priority for the company, and analysts are praising the reallocation of resources. At the same time, Apple shareholders rejected a union-backed request for a report outlining what ethical guidelines the company is following as it adopts AI, Bloomberg also reported.

    SambaNova unveils a bundle of 56 generative AI models for the enterprise. Called Samba-1, the one trillion parameter AI system is designed for a variety of tasks including coding, rewriting text, and translation, with the models boasting various specialties. The models were trained independently, and the company is positioning Samba-1 as a modular system that will allow customers to easily iterate and add new models into their AI strategies. “A request made to a large model like GPT-4 travels one direction—through GPT-4. But a request made to Samba-1 travels one of 56 directions (to one of the 56 models making up Samba-1), depending on the rules and policies a customer specifies,” TechCrunch wrote.

    Meta plans to launch Llama 3 in July and hopes to “loosen up” the model. That’s according to The Information. Safeguards added to Llama 2 prevent the model from answering questions it deems controversial, but it often misunderstands the context and can be quite unhelpful as a result. For example, it would understand a query about “how to kill a vehicle's engine” as a question about how to commit violence rather than one about how to shut off the engine. Meta’s senior leadership as well as researchers at the company have come to believe the model is “too safe,” according to The Information, and want to “loosen up” the next iteration to make sure it’s more useful. The company also soon plans to appoint an employee to oversee tone and safety training of Llama 3 while increasing nuance in its responses.

    FORTUNE ON AI

    Sundar Pichai blasts Google staff for offending customers with Gemini AI bias: ‘To be clear, that’s totally unacceptable’ —Christiaan Hetzner

    Klarna froze hiring because of AI. Now it says its chatbot does the work of 700 full-time staff —Ryan Hogg

    Wendy’s is going to implement Uber-style surge pricing for your Baconator—with the help of AI —Sasha Rogelberg

    Electrical transformers could be a giant bottleneck waiting for the AI industry—unless AI itself solves the problem first —Dylan Sloan

    AI comes to the backyard barbecue—and could double the market size —Chris Morris

    AI CALENDAR

    March 11-15: SXSW artificial intelligence track in Austin

    March 18-21: Nvidia GTC AI conference in San Jose, Calif.

    April 15-16: Fortune Brainstorm AI London (Register here.)

    May 7-11: International Conference on Learning Representations (ICLR) in Vienna

    June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

    BRAIN FOOD

    Should AI chatbots be giving out election information? It’s a reasonable question to ask after reading the report published this week by the new nonprofit news studio Proof, which found that leading AI models often answered election-related queries with wildly inaccurate or misleading information.

    Proof brought together more than 40 state and local election officials and AI experts to put the LLMs to the test on election information, querying leading models with questions like “How do I register to vote in Nevada?” and “Where do I vote in [insert zip code]?” and then evaluating and fact-checking the answers provided by the models. They tested Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2, and Mistral’s Mixtral. Altogether, 51% of the models’ collective responses were ranked as “inaccurate” by a majority of testers, while 40% were ranked as harmful, 38% as incomplete, and 13% as biased. GPT-4 scored better than its competitors on most fronts, but not by a wide margin.

    “The chatbots are not ready for prime time when it comes to giving important nuanced information about elections,” Seth Bluestein, a Republican city commissioner in Philadelphia who participated in the testing event, is quoted as saying in the report. Other testers called various answers “hot garbage,” “all over the place,” and said they were “disappointed to see a lot of errors on basic facts.” 

    They also found that many of the answers put forth by the LLMs raised questions about how these companies are complying with their own pledges to mitigate election misinformation, such as OpenAI’s recent pledge to direct users seeking information about elections to a legitimate source, CanIVote.org. “None of the responses we collected from GPT-4 referred to that website,” reads the report.

    It seems like having their AI chatbots answer election-related queries is high risk, low reward for these companies. So why don’t they just stop the models from answering these questions altogether? They might be trying, and failing, if OpenAI’s efforts to direct voters to a reliable source are any indication. But it’s part of an interesting conversation about guardrails that’s playing out across the generative AI industry at the moment.

    One of the biggest AI stories unfolding over the past week has been around Gemini, which came under fire after guardrails put in place to overcome racial biases in AI seemingly led the tool to create offensive and historically inaccurate images. Google was forced to temporarily disable Gemini’s image-creation capabilities and issue a public apology, and its stock fell as a result. My Eye on AI cowriter Jeremy covered the controversy in Tuesday’s newsletter.

    “We got it wrong,” said Google CEO Sundar Pichai in a memo addressing the controversy, according to Semafor.

    As long as we have AI models, there will be some people saying we need more guardrails and others arguing there are too many as it is. It’s not all that dissimilar to the debates about content moderation on social media that have been playing out for decades—and that are just now hitting the Supreme Court. One is about distribution and the other about creation, but both have widespread material impacts on how people interact with these tools, our information ecosystem, and the most pressing issues in society. 

    This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.