As economic turbulence looms, Forrester recommends companies rein in tech spending. But not on A.I.

September 14, 2022, 5:13 PM UTC
Alphabet CEO Sundar Pichai is among the tech executives who have been warning of a deep recession looming and preparing staff for belt-tightening. Tech research firm Forrester Research says executives in other industries should also be cautious about their tech spend this year, but that many A.I. investments should still be funded.
Jerod Harris—Getty Images for Vox Media

Welcome to this week’s edition of Eye on A.I. Apologies that it is landing in your inbox a day later than usual. Technical difficulties prevented us from being able to send it out yesterday.

A chill wind has been blowing through Silicon Valley now for several months. Big tech companies from Meta to Alphabet to Microsoft have frozen hiring in many areas and even laid off staff as top executives warn of a potentially deep recession looming. But outside of tech, many business leaders have remained more sanguine about what the next year may bring.

Such optimism may be misplaced. At least, that’s the view of influential technology research firm Forrester Research, which this week put out its budgeting and planning advice for corporate technology budgets for 2023. “Global unrest, supply chain instability, soaring inflation, and the long shadow of the pandemic,” all point to an economic slowdown, the firm wrote. It cautioned that, “slower overall spending mixed with turbulent and lumpy employment trends will make it difficult to navigate 2023 planning and budgeting.”

Forrester is recommending that companies look for ways to trim spending, in part by jettisoning older technology, including some early cloud deployments and "bloated software contracts," which it characterized as software a company pays for but doesn't often use. That includes taking a hard look at whether the company is paying for too many seat licenses for some products.

When it comes to investing in artificial intelligence capabilities, however, Forrester is advocating that companies keep spending. Specifically, the research firm recommends that companies increase spending on technologies that “improve customer experience and reduce costs,” including what it calls “intelligent agents,” a phrase that encompasses both A.I.-powered chatbots and other kinds of digital assistants.

Chris Gardner, Forrester’s vice president and research director, tells me that robotic process automation—in which rote steps a human would otherwise perform, such as copying data between two different software applications, are automated, often without much machine learning involved—has been proven to increase efficiency. Adding A.I. to that equation can push the time-and-labor savings further. “We believe this is the next step of what these bots will do,” he says. “And, especially in a time of financial uncertainty, making an argument for operational efficiency is never a bad call.” For instance, natural language processing software can take a recording of a call with a customer, categorize that call, and automatically pull information from the transcript to populate fields in a database. Or it could take information from free-form text and convert it into tabular data.
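To make Gardner's transcript example concrete, here is a minimal Python sketch of turning free-form call notes into database-ready fields. It uses simple regular expressions as a stand-in for a full natural language processing pipeline; the field names, patterns, and sample text are invented for illustration, not drawn from any particular product.

```python
import re

def extract_fields(transcript: str) -> dict:
    """Pull structured fields out of free-form call notes.

    A toy stand-in for a real NLP pipeline: regular expressions
    play the role of the language model. Patterns are illustrative.
    """
    patterns = {
        "customer": r"customer (?:name is|named) (\w+ \w+)",
        "order_id": r"order (?:number|#)\s*(\d+)",
        "issue": r"(?:issue|problem) (?:is|was) ([^.]+)",
    }
    row = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, transcript, re.IGNORECASE)
        row[field] = match.group(1).strip() if match else None
    return row

notes = ("Customer name is Jane Doe, calling about order #4471. "
         "The issue is a duplicate charge on her card.")
print(extract_fields(notes))
# → {'customer': 'Jane Doe', 'order_id': '4471',
#    'issue': 'a duplicate charge on her card'}
```

Each extracted row can then be inserted into a database table—the "populate fields" step Gardner describes—without a human retyping anything.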

Forrester is also suggesting that companies continue to spend money—although not budget-busting sums—on targeted experiments involving A.I. technologies that it terms “emerging.” Among these are what Forrester calls “edge intelligence”—where A.I. software is deployed on machines or devices that are close to the source of data collection, not in some far-off cloud-based data center. Gardner says that for some industries, such as manufacturing and retail, edge intelligence is already being deployed in a big way. But others, such as health care or transportation, “are just getting their feet wet.”

Surprisingly, one of the emerging areas where Forrester recommends businesses begin experimenting is what it calls “TuringBots.” This is A.I. software that can itself be used to write software code. Gardner acknowledges that some coders have criticized A.I.-written code as buggy and containing potentially dangerous cybersecurity holes—with some saying that the time it takes human experts to monitor the A.I.-written code for flaws negates any time-savings. But he says the technology is rapidly improving and could lead to big efficiencies in the future.

Finally, the report emphasizes that privacy-preserving techniques should be an area where companies continue to invest. “This all goes back to the trust imperative,” Gardner says. “It is not just a matter of being operationally efficient, it is also being trustworthy.” He says that when customers or business partners don’t trust an organization to keep their data safe, and not to use it in a way that differs from the original purpose for which it was collected, sales are lost and partnerships break apart. “Privacy enabled technology is critical for most organizations,” he says.

Here’s the rest of this week’s news in A.I.

Jeremy Kahn


Startup behind viral text-to-image generating A.I. Stable Diffusion looks to raise a reported $100 million at a possible unicorn valuation. That's according to a story in Forbes, which cites sources familiar with the fundraising efforts of Stability AI, the London-based company that created the popular image-making A.I. software. Interest has, according to the publication, come from venture capital firm Coatue, in a deal that would value Stability at $500 million, and from Lightspeed Venture Partners, which was willing to provide money at an even loftier $1 billion valuation. Either way, the offers show how much investor appetite there is for text-to-image generators, even though Stability's current version is open-source and free to use, and the startup has no clear business model. So far, the company has been funded by its founder Emad Mostaque, who formerly managed a hedge fund, and through the sale of some convertible securities, although it claims to have a string of paying customers (none disclosed) lined up to pay to use its A.I. software.

Washington-based think tank raises concerns about the effect of EU's proposed A.I. law on open source developers. Brookings, the centrist D.C. think tank, has published a report criticizing portions of the European Union's proposed landmark Artificial Intelligence Act for a possible chilling effect on the development of open source A.I. software. The think tank says the law would require open source developers to adhere to the same standards as commercial software developers on risk assessment and mitigation, data governance, technical documentation, transparency, and cybersecurity, and that they could face legal liability if a private company adopted their open source software and it contributed to some harm. TechCrunch has more on the report and quotes a number of experts in both A.I. and law who disagree on whether the law would actually have the effect that Brookings fears, and on whether open source should, or should not, be subject to the same kinds of risk mitigation guidelines as commercially developed A.I. systems.

Nvidia tops machine learning benchmark. ML Commons, the nonprofit group that runs several closely-watched benchmarks testing computer hardware on A.I. workloads, has released its latest results for inference, which measures how well hardware can run A.I. models after those models have been fully trained. Nvidia topped the rankings, as it has done since the benchmark tests began in 2018. But what’s notable this year is that Nvidia beat the competition with its new H100 Tensor Core GPUs, which are based on an A.I.-specific chip design the company calls Hopper. In the past, Nvidia fielded more conventional graphics processing units, which are not specifically designed for A.I. and can also be used for gaming and cryptocurrency mining. The company says the H100 offers 4.5 times better performance than prior systems. The results help validate the argument that A.I.-specific chip architectures are worth investing in and are likely to win increasing market share from more conventional chips. You can read more in this story in The Register.

Meta hands off PyTorch to the Linux Foundation. The social media giant developed the popular open-source machine learning framework and has helped maintain it. But, as it turns to the metaverse, the company is handing that responsibility off to a new PyTorch Foundation that is being run under the auspices of the Linux Foundation. The new PyTorch Foundation will have a board with members from AMD, Amazon Web Services, Google Cloud, Meta, Microsoft Azure, and Nvidia. You can read Meta’s announcement here.

British data regulator releases guidance on privacy-preserving A.I. methods. The U.K. Information Commissioner’s Office published draft guidance on the use of what it termed “privacy-enhancing” technologies, recommending that government departments begin exploring these methods and consider using them. The document provides an excellent overview of the pros and cons of the various privacy-preserving methods: secure multi-party computation, homomorphic encryption, differential privacy, zero-knowledge proofs, the use of synthetic data, federated learning, and trusted execution environments. Unfortunately, as the ICO makes clear, many of these technologies are immature, require a lot of computing resources, or are too slow to be helpful for many use cases—and some suffer from all three of those problems. You can read the report here.
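To give a flavor of the simplest item on that list: differential privacy works by adding carefully calibrated random noise to a query's answer, so that no individual's presence in the data can be inferred. Here is a minimal Python sketch for a counting query; the epsilon value and numbers are illustrative and not drawn from the ICO guidance.

```python
import math
import random

def dp_count(true_count, epsilon, rng=random):
    """Release a count with Laplace noise, the core differential
    privacy mechanism. A counting query changes by at most 1 when
    any one person's record is added or removed (sensitivity 1),
    so noise drawn from Laplace(1/epsilon) masks any individual.
    """
    scale = 1.0 / epsilon
    # Inverse-CDF sampling of the Laplace distribution.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Each release is noisy, but unbiased: averages hover near the truth.
print(dp_count(100, epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; that tension between accuracy and protection is exactly the immaturity-versus-utility trade-off the ICO flags.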

One of the brains behind Amazon Alexa launches a new A.I. startup. Backed by $20 million in initial funding, William Tunstall-Pedoe has founded Unlikely AI, according to Bloomberg News. Unlikely is among a new crop of startups that are driving to create artificial general intelligence—or machines that have the kind of flexible, multi-task intelligence that humans possess. Tunstall-Pedoe tells Bloomberg he plans to get there not by using the popular deep learning approaches that most other startups rely on, but by exploring other (undisclosed) breakthroughs. He previously founded the voice-activated digital assistant Evi, which Amazon acquired in 2012; Amazon incorporated much of Evi’s underlying technology into Alexa.


Zipline, the San Francisco-based drone delivery company that has made a name for itself ferrying vital medical supplies around Africa, has hired Deepak Ahuja to be its chief business and financial officer. Ahuja was previously the CFO at Alphabet company Verily Life Sciences and before that did two stints as CFO at Tesla. TechCrunch has more here.

Dataiku, the New York-based data analytics and A.I. software company, has hired Daniel Brennan as chief legal officer, according to a company statement. Brennan was previously vice president and deputy general counsel at Twitter.

Payments giant PayPal announced it has hired John Kim as its new chief product officer. Kim was previously president of Expedia Group’s Expedia Marketplace, where he helped oversee some of the company’s A.I.-enabled innovations.


Google develops a better audio generating A.I., but warns of potential misuse. Researchers at Google say they have used the same techniques that underpin large language models to create an A.I. system that can generate realistic novel audio, including coherent and consistent speech and musical compositions. In recent years, A.I. has led to several breakthroughs in audio generation, including WaveNet (in which an A.I. generates a sound wave one sample at a time, predicting each sample from the ones before it) and generative adversarial networks (the technology behind most audio deepfakes, in which one network is trained to generate audio that fools another network into classifying it as real). But the Google researchers say these methods suffer from several drawbacks: they require a lot of computational power to work, and when asked to generate lengthy segments of human speech, they often veer off into nonsensical babble.

To solve these issues, the Google team trained a Transformer-based system to predict two different kinds of tokens: “semantic” tokens representing longer chunks of sound that convey some meaning, such as syllables or bars of music, and “acoustic” tokens representing the fine detail of the next note or sound. The team found that the system, called AudioLM, was able to create far more consistent and believable speech (the accents didn’t warble and the system didn’t start babbling). It also created continuations of piano music that human listeners preferred to those generated by a system that only used acoustic tokens. In both cases, the system needs to be prompted with a segment of audio, which it then seeks to continue.
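The two-stage idea can be illustrated with a toy sketch: generate a coarse "semantic" plan first, then expand it into fine-grained "acoustic" tokens. This Python snippet is purely illustrative; AudioLM's actual tokens are learned from audio by neural networks and its predictions come from Transformers, not the made-up lookup tables below.

```python
import random

# Toy stand-ins for learned audio tokens. In AudioLM, both
# vocabularies are produced by neural tokenizers, not hand-written.
SEMANTIC_VOCAB = ["sy1", "sy2", "sy3"]   # coarse, syllable-like units
ACOUSTIC_FOR = {                          # fine acoustic tokens per unit
    "sy1": ["a1", "a2"],
    "sy2": ["a3", "a4"],
    "sy3": ["a5", "a6"],
}

def generate(prompt, length, rng):
    """Continue a semantic-token prompt, then render acoustics.

    Stage 1 extends the coarse plan (here: random choice standing in
    for a Transformer's next-token prediction). Stage 2 expands each
    semantic token into acoustic detail consistent with that plan.
    """
    semantic = list(prompt)
    while len(semantic) < length:
        semantic.append(rng.choice(SEMANTIC_VOCAB))
    acoustic = [tok for s in semantic for tok in ACOUSTIC_FOR[s]]
    return semantic, acoustic

sem, aco = generate(["sy1"], 4, random.Random(0))
print(sem, aco)
```

Because the acoustic detail is always rendered from an already-committed semantic plan, the output cannot drift into sounds that contradict the plan—a toy version of why AudioLM's long speech segments stay coherent instead of dissolving into babble.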

Given that audio deepfakes are already a fast-growing concern, AudioLM could also be problematic by making it easier to create even more believable malevolent voice impersonations. The Google researchers acknowledge this danger. To counter it, they say they have created an A.I. classifier that can easily detect speech generated by AudioLM even though those speech segments are often indistinguishable from a real voice to a human listener.

You can read the full paper, posted on a non-peer-reviewed research repository, here. You can listen to some examples of the speech and piano continuations here.


How A.I. technologies could help resolve food insecurity—by Danielle Bernabe

Alphabet CEO Sundar Pichai says ‘broken’ Google Voice assistant proves that A.I. isn’t sentient—by Kylie Robison

Commentary: Here’s why A.I. chatbots might have more empathy than your manager—by Michelle Zhou


Much ado about ‘Loab.’
The bits of Twitter and Reddit that are fascinated with ultra-large A.I. models and the new A.I.-based text-to-image generation systems such as DALL-E, Midjourney, and Stable Diffusion briefly exploded last week over “Loab.” That’s the name that a Twitter user who goes by the handle @supercomposite, and who identifies herself as a Swedish musician and A.I. artist, gave to the image of a middle-aged woman with sepulchral features that she accidentally created using a text-to-image generator.

Supercomposite had asked the A.I. system to find the image that it thought was most opposite of the text prompt “Brando” (as in the actor, Marlon). This yielded a kind of cartoonish city skyline in black, imprinted with a phrase that looked like “Digitapntics” in green lettering. She then wondered whether, if she asked the system to find the opposite of this skyline image, it would yield an image of Marlon Brando. But when she asked the system to do this, the image that appeared, strangely, was of this rather creepy-looking woman, whom Supercomposite calls Loab.
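We don't know exactly how the tool she used implements "find the opposite," but one plausible mechanism is to negate the prompt's embedding vector and generate whatever lies nearest that flipped point. Here's a toy Python sketch of that idea; the three-dimensional vectors and names are entirely made up (real systems use learned embeddings with hundreds of dimensions).

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Invented toy embedding space standing in for a learned one.
embeddings = {
    "brando": [0.9, 0.1, 0.2],
    "skyline": [-0.8, -0.2, -0.1],
    "loab": [-0.7, 0.6, -0.3],
}

def most_opposite(name):
    """Return the item closest to the negated query embedding."""
    target = [-x for x in embeddings[name]]
    others = (k for k in embeddings if k != name)
    return max(others, key=lambda k: cosine(embeddings[k], target))

print(most_opposite("brando"))   # → skyline
print(most_opposite("skyline"))  # → brando
```

In this toy space, the opposite of the opposite does lead back to "brando," which is exactly the intuition that made Loab's appearance so surprising: in the real system's vastly higher-dimensional space, the round trip landed somewhere else entirely.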

Supercomposite said that not only was Loab’s visage disturbing, but that when she cross-bred the original Loab image with any other images, the essential features of this woman (her rosacea-scarred cheeks, her sunken eyes and general facial shape) remained and the images became increasingly violent and horrific. She said that many of Loab’s features were still identifiable even when she tried to push the image generation system to create more benign and “pleasant” pictures.

A crazy number of Twitter posts were devoted to discussing what it said about the human biases around standards of attractiveness and beauty that an A.I. system trained on millions of human-generated images and their captions, when asked to find the image most opposite of “Brando,” would come up with Loab. Others wondered what it said about human misogyny and violence that so many of the Loab images seemed to be associated with gore. There was a fascinating discussion about the weird mathematics of the hyperdimensional spaces that large deep learning systems juggle, and about why, in such a space, there are actually far fewer images that are the opposite of any given image than one would think.

Fascinating as this rabbit hole was (and believe me, I wasted a good hour on it myself), the whole discussion seemed to be based on a complete misreading of how @supercomposite had actually discovered Loab and what she had done subsequently. First of all, Loab didn’t show up in response to a prompt to find the image that was most opposite of Marlon Brando. She showed up in response to a prompt to find the image most opposite of a weird city skyline imprinted with the nonsensical phrase “Digitapntics.” What’s more, it is not the case that she showed up in response to a lot of different prompts, haunting the artist like a digital specter. Rather, once she had been created, her essential features were difficult to eliminate by crossing her image with other ones. (That’s interesting, but not nearly as creepy as if Loab had suddenly started appearing in completely new images generated by completely unrelated prompts.)

Anyway, Smithsonian has a good summary of much of the story here. I think the only clear takeaway from “Loab” is that it shows how little we understand about how these very large A.I. models actually work and how they store what we humans would think of as “concepts”—related images and text. As a result, large A.I. models will continue to surprise us with their outputs. That makes them interesting. But it also makes them difficult to use in ways that we are sure will be safe. And that is something businesses ought to be thinking hard about if they are going to start using these very large models as key building blocks in their own products and services.
