At Callaway, the high-end golf-equipment stalwart, the process of making clubs has always been labor-intensive—from grinding and polishing clubheads to crafting wood- and steel-shafted irons and wedges. The company has also long paired such artisanal handwork with technological innovation, recently partnering with aerospace titan Boeing to co-design several aerodynamic clubs.
So when the company set out about four years ago to make its latest club line, called Epic Flash, it took the next evolutionary technological step, turning to artificial intelligence and machine learning for help. A typical club-design process might involve five to seven physical prototypes; for Epic Flash, Callaway created 15,000 virtual ones. From those, an algorithm determined the best design, selecting for peak performance—i.e., ball speed—while also conforming to the rules set forth by the U.S. Golf Association. Golf Digest gave the $530 Epic Flash driver a score of 20 out of 20 on its 2019 “Hot List,” the only driver to earn that honor. A human could not have achieved this kind of rapid iteration, or precision.
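The selection step described above—generate many virtual candidates, discard any that violate the rules, keep the top performer—can be sketched in a few lines of Python. This is a hypothetical illustration, not Callaway's actual method: the `performance` and `conforms` functions here are invented stand-ins, where a real workflow would run physics simulations and check the actual USGA equipment rules.

```python
import random

# Hypothetical generate-filter-select sketch (not Callaway's real pipeline).
random.seed(42)

def performance(design):
    # Stand-in "ball speed" score: a made-up function of face thickness
    # and loft. A real system would run a physics simulation here.
    thickness, loft = design
    return 100 - (thickness - 2.2) ** 2 - 0.5 * (loft - 10.5) ** 2

def conforms(design):
    # Stand-in for conformance rules (invented bounds, not USGA limits).
    thickness, loft = design
    return 1.8 <= thickness <= 3.0 and 8.0 <= loft <= 12.0

# 15,000 virtual candidates, echoing the number cited in the article.
candidates = [(random.uniform(1.5, 3.5), random.uniform(7.0, 13.0))
              for _ in range(15000)]

# Keep only rule-conforming designs, then pick the peak performer.
legal = [d for d in candidates if conforms(d)]
best = max(legal, key=performance)
print(best)
```

Swap a real simulator into `performance` and real rule checks into `conforms`, and the same skeleton applies; the point is simply that an algorithm can evaluate thousands of candidates where a human team might build five to seven prototypes.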
The Epic Flash offers just a taste of the ways in which algorithm-driven design could transform our planet as it becomes significantly more common and, in time, even the norm. “With a precise idea of the conditions this thing I’m designing will see in real life, I can design it better,” says Maurice Conti, chief innovation officer at Alpha, an innovation arm of the telecommunications company Telefónica. Though still nascent, artificial intelligence and machine learning are starting to alter our built world, from spatulas to skyscrapers, helping designers solve technical problems with unprecedented speed.
To understand the profound shift taking place, one need only consider that, according to a new report from the UN’s World Intellectual Property Organization, more than 170,000 A.I. patents have been published worldwide since 2013—accounting for half of all A.I. patents ever published. And in the context of design, as with most things related to A.I., the results have the potential to be both terrifying and exhilarating. There are many questions to ponder: What will the role of the designer be when algorithms can largely design for us? Will the human hand (and heart) remain key to the process? At some point, will A.I. simply take over? And will A.I. actually produce inspired design—or will it coldly prioritize cost efficiency at the expense of comfort, aesthetic pleasure, and practicality?
The pioneers of this evolution are computer-aided design (CAD) companies like Adobe and Autodesk, as well as tech giants such as Google, IBM, and Microsoft. In 2016, Adobe launched Adobe Sensei, a machine-learning framework that powers its Creative Cloud software and many of its other platforms. Designers who work with Adobe have started using Sensei tools such as image matching and customer-data analysis to hone their own ideas. Last fall, the company introduced Intelligent Alerts, a feature that metaphorically peers over designers’ shoulders to recommend relevant data sets they might not otherwise have considered.
Similarly, in 2017, Autodesk—the firm that kick-started the automation of design tools in the 1980s and ’90s with AutoCAD—launched its first commercial “generative design” tools, an outgrowth of its Project Dreamcatcher research. Designers use them to test more design options more quickly: The software generates possibilities based on user inputs about objectives, materials, cost restrictions, manufacturing methods, and so on. One early use case: General Motors is developing auto parts with these tools, which it says are helping it build pieces that are both lighter and stronger.
Trades like furniture-making, architecture, and fashion are experimenting with these new tools and considering their industry-shifting potential. Two playful furniture-making exercises using A.I.—one by Radical Norms, a Toronto research company, using a Google-powered platform; the other by designers Philipp Schmitt and Steffen Weiss, who fed 562 Pinterest photos of 20th-century chairs into a neural network—have generated furniture that would look more at home in a contemporary art gallery than on your veranda. But they point toward a future in which A.I. could become an industry standard, allowing for hyper-personalized products with body-fitting designs and customized colors and patterns while also saving significant time and money.
In fashion, the Italian e-commerce platform Yoox has been among the first to introduce an A.I.-led line, the private label 8 by Yoox. The collection was created by compiling content from across social media and the Internet, focusing on key markets, and reviewing data about trends, products sold, and customer feedback. The resulting men’s and women’s collections are unsurprising “essentials” that look sort of stylish without veering toward the outré—the outcome you might expect from designs that reflect an aggregated analysis of preferences rather than an individual artist’s flair.
Graphic design is another area that machine learning is affecting, with the venture-backed startups Tailor Brands and Logojoy at the forefront, both of them offering services that automate the making of logos, stationery, and the like for small businesses. Last year, chip designer Nvidia unveiled what it claims is the first-ever video game demo with A.I.-generated graphics. And at least one A.I. team is exploring scent design: Partnering with fragrance-maker Symrise, IBM Research recently developed perfumes using machine learning.
As the disparate list of experiments grows longer, design in general stands on the cusp of a major transformation, becoming far more data-driven, automated, and efficient. The risk is that design could become homogenized and lose its human dimension, with designers reduced to custodians of A.I.-generated ideas rather than collaborators who wield the technology as a tool.
There’s an alternative scenario: The importance of empathy might actually grow. As A.I. expert and venture capitalist Kai-Fu Lee points out in his book A.I. Superpowers: China, Silicon Valley, and the New World Order, there are two categories of things A.I. cannot do (at least not yet). One is to carry out creative endeavors, like science, storytelling, art, and, yes, design; the other is to build empathy, compassion, and trust—all of which require human-to-human connection. Only humans can truly make a product that meaningfully serves its customer. The same goes for understanding the complexities of an urban plan that serves its community.
As A.I. experts tell Fortune in the interviews that follow, the link between machine learning and design is just beginning to bear fruit. Soon enough, A.I.-led design will be an unavoidable shaper of the world around us. And it is certain that design, like other industries being altered by A.I., will soon face a host of social, cultural, and ethical dilemmas as technology rewrites job descriptions, reallocates resources, and reshapes the physical environments where we live and work. It’s all the more important, then, that designers learn how to thoughtfully and compassionately steer A.I. rather than let A.I. steer them.
Maurice Conti
Chief innovation officer, Alpha, an innovation arm of the telecommunications company Telefónica that focuses on creating technologies that could reshape fields like health and energy.
“We recently built [an A.I.] prototype to assess teamwork and collaboration. Our early results show that A.I. can perform at the same level as a trained human psychologist observing a team doing the same exercises, except (A) they’re machine systems so you don’t need the psychologist—it’s super scalable; and (B) we can do it in real time, whereas a psychologist has to observe and collate results, and give an opinion or rating on collaboration.
“I think you can apply these technologies to just about every small facet of any industry. You’ll get better at designing things like wind turbines or building curtain walls for buildings because we’ll have more data—better data—and a better understanding of that data. A.I. is sort of a peanut butter you can spread across [multiple industries]. With a precise idea of the conditions this thing I’m designing will see in real life, I can design it better.”
Rana el Kaliouby
Cofounder and CEO, Affectiva, a venture-backed company whose pioneering “Emotion A.I.” is used by brands in the auto industry, education, and health care.
“I don’t see a world in which the robots are going to take over. I really worry about the more imminent threats of A.I. around bias, though, and not understanding how these technologies are going to get deployed and used. We’re very big advocates of that [broad perspective].
“In A.I., we deal with facial recognition. If you train the face detector with mostly faces of, I don’t know, white men, and then you go to Africa or Asia, it’s not going to work. I’m originally from Egypt—I’m like, ‘We need women who are wearing hijabs to be in the data set!’
“We like to think of the vehicle as an example of what this advanced interaction with technology will look like. We can also take that into the home or the office. Last fall, [Affectiva] announced a partnership with SoftBank Robotics to make social robots. We’re going to be in the lead on the emotion and social brain of the robot.”
Mark Nitzberg
Executive director, Center for Human-Compatible A.I. (CHAI) at UC Berkeley, and principal and chief scientist, Cambrian Group, which works with companies like GE and BMW on planning, strategy, and design.
“[CHAI’s] mission is to reinvent A.I. in a way that’s safe and compatible with human objectives and preferences. Since the beginning of engineering, a system would be built to achieve the objectives of that system. As we delegate more and more to such systems, they achieve their objectives at the expense of our preferences. For example, our preference is not to have our arm ripped off when we walk by a robot. You would think it’s an obvious preference, but it’s not programmed into every robot on the assembly line. That’s why many are in cages.
“There may be some kind of intelligence in A.I., but it’s not a mechanical copy of human intelligence. If you asked [CHAI faculty member] Stuart Russell, ‘Will you create consciousness in a machine?’—you couldn’t give him $5 billion to do that. He does not know how to do that, and I think most of the A.I. community would say the same.”
A version of this article appears in the March 2019 issue of Fortune with the headline “Remade By Data.”