Weekly analysis at the intersection of artificial intelligence and industry.

November 30, 2022

Hello, and welcome to November’s special monthly edition of Eye on A.I. David Meyer here in Berlin, filling in for Jeremy.

Stable Diffusion is growing up so fast. Barely three months after Stability AI introduced the image generator to the world, version 2.0 is out. However, while this evolution of the system produces noticeably better-quality images, the startup’s choices are proving controversial.

First, the unalloyed good. Stable Diffusion 2.0 features new text-to-image models trained with a LAION-developed text encoder that delivers a real step up in quality. The resulting pictures are bigger, too: 768×768 is now available as a default resolution, and an upscaler model can output images at 2048×2048 or higher. Also of note is a new depth-guided model, depth2img, which can infer strikingly different new images from an input image.

The controversy comes with the ways in which Stability AI has moved to address criticism of earlier versions. It has made it more difficult to use Stable Diffusion to generate pictures of celebrities, or NSFW content. And gone is the ability to tell Stable Diffusion to generate images “in the style of” specific artists such as the famous-for-being-ripped-off Greg Rutkowski. While the no-NSFW change came from cleaning up Stable Diffusion’s training data, the artist- and celebrity-related changes stem from how the tool now encodes and retrieves data, rather than from deliberately filtering out those artists, Stability AI founder Emad Mostaque told The Verge.

Regarding NSFW imagery, as Mostaque told users on Discord, Stability AI had to choose between stopping people from generating images of children, or stopping them from generating pornographic images, because allowing both was a recipe for disaster. That, of course, didn’t ward off accusations of censorship.

Mostaque was reportedly less keen to discuss whether the artist- and celebrity-related changes were motivated by a desire to avoid legal action, but that is a reasonable assumption to make. Copyright concerns have certainly been exercising artistic communities of late. When the venerable DeviantArt community announced its own Stable Diffusion-based text-to-image generator, DreamUp, earlier this month, it initially set the defaults so that users’ art would automatically be included in third-party image datasets. Cue outrage and a same-day U-turn (though users still need to fill out a form to stop their “deviations” from being used to further train DreamUp).

It clearly isn’t possible to please everyone with these tools, but that’s to be expected when they’re developing at such breakneck speed while also being available to the general public. It’s a bit like sprinting along a tightrope, and who knows which pitfalls will become apparent in the coming months.

More A.I.-related news below.

David Meyer




Swedish researchers have used A.I. to design synthetic DNA. The team at Chalmers University of Technology made DNA that “contains the exact instructions to control the quantity of a specific protein,” in the words of lead researcher Aleksej Zelezniak. The upshot could be faster and cheaper drug and vaccine development, using techniques that the team says are comparable to A.I. face-generation: “The researchers' A.I. has been taught the structure and regulatory code of DNA. The A.I. then designs synthetic DNA, where it is easy to modify its regulatory information in the desired direction of gene expression.”

Welcome to the “Matterverse.” A team at the University of California San Diego has created an enormous database of more than 31 million materials that have never before been synthesized, using a graph neural network architecture called M3GNet (Nature article) that can predict their structure and properties. More than a million of those materials are potentially stable. The beauty of this deep-learning-based tool is that it works accurately across all of the periodic table’s elements; previous tools in this vein tended to be either inaccurate or very limited in scope.



San Francisco police will now be allowed to deploy robots that can kill you, by Janie Har and the Associated Press

South Dakota just banned TikTok from state-owned devices because of fears of a national security threat, by Alex Barinka and Bloomberg
