This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
The sun is starting to set on the Wild West days of artificial intelligence. The sheriff and his posse have just ridden into town.
In this case, it’s laws, not lawmen, that are coming for A.I., heralding the end of the era of “self-regulation.” First, the European Union last week unveiled its proposed Artificial Intelligence Act.
It’s a sweeping 108-page piece of legislation that would ban the use of A.I. for what it terms “manipulative, addictive, social control and indiscriminate surveillance practices.”
It would also impose strict requirements on what it calls “high-risk” uses of A.I. These include critical infrastructure where people’s lives and health are at risk; educational and vocational settings where A.I. is used to determine access to teaching or training; employment and worker management; essential private and public services, including financial services such as loans; law enforcement; migration, asylum, and border control; and the administration of justice.
In these high-risk areas, companies need to ensure they’ve assessed the risks and taken steps to mitigate any dangers. They also have to maintain audit trails, guarantee that the data they’ve used to train the system is of “high quality,” and ensure that “meaningful human oversight” of the A.I. system is maintained.
Yes, many of these terms, as critics of the proposed law have pointed out, remain ambiguous and frustratingly ill-defined, opening up a potential legal morass if they aren’t clarified later in the rule-making process. And yes, it will likely take two years for the law to wend its way through the EU’s legal sausage-making. But there’s no mistaking the landmark nature of the legislation, which is the first effort by a government anywhere in the world to wrap its arms around A.I. and all its potential uses for good and ill.
Some critics complained the law didn’t go far enough. The Civil Liberties Union for Europe, for instance, worries that the proposed law leaves too many exemptions that could still lead to widespread deployment of controversial technologies, such as facial recognition software, by government and law enforcement. “There are way too many problematic uses of the technology that are allowed, such as the use of algorithms to forecast crime or to have computers assess the emotional state of people at border control,” said Orsolya Reich, a senior advocacy officer at the civil rights group.
Fair Trials, a criminal justice-focused NGO, also came out against the proposals, saying they lacked meaningful safeguards to protect against discrimination and uses of A.I. that undermine the presumption of innocence.
While civil liberties groups say the rules are too lenient on government, big U.S. tech companies are already lining up against the law on the grounds that it is too strict. Benjamin Mueller, a policy analyst at the Center for Data Innovation (a think tank that is indirectly funded by U.S. tech companies and often takes positions favorable to them), dismissed the law as “a thicket of new rules that will hamstring companies hoping to build and use A.I. in Europe” and cause the Continent “to fall even further behind the United States and China.”
The reasons Big Tech might be opposed are clear enough: the EU Artificial Intelligence Act has been modeled on Europe’s landmark data privacy law, the General Data Protection Regulation (GDPR), and has the same globe-spanning reach. It applies to any A.I. system that affects any citizen or resident of the 27-nation European bloc, no matter where the company developing that software is based. And the fines for violating the law are, as with GDPR, substantial: up to 20 million euros ($24 million) or 4% of global sales, whichever is larger.
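To make that “whichever is larger” provision concrete, here is a minimal sketch of how the maximum fine would be calculated; the revenue figure in the example is purely hypothetical.

```python
# Minimal sketch of the proposed maximum-fine formula described above:
# the greater of 20 million euros or 4% of global annual sales.

def max_fine_eur(global_annual_sales_eur: float) -> float:
    return max(20_000_000, 0.04 * global_annual_sales_eur)

# Hypothetical example: a company with 10 billion euros in global annual sales
print(f"{max_fine_eur(10_000_000_000):,.0f} euros")  # -> 400,000,000 euros
```

In other words, for any company with more than 500 million euros in global sales, the percentage-based cap, not the flat 20 million euro figure, is the one that bites.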
Anu Bradford, the Columbia University law professor who wrote The Brussels Effect, a book about the EU’s ability to leverage access to its markets for geopolitical influence, told me that the EU clearly hopes to establish a de facto global standard on A.I. She said the reasons were as much about economic self-preservation as about European values concerning individual rights and liberties: it doesn’t want European companies, many of which are already lagging in A.I. development and deployment, to be relegated to irrelevancy by American and Chinese companies that are unbound by any niceties concerning A.I. ethics.
The bet here is that most companies won’t want to develop one A.I. system just for European customers and employees and a different one for everyone else. Most A.I. systems perform better the more data you feed them, so there’s an advantage to training a system on European data. And there are lots of practical reasons why creating separate systems for separate regions will be difficult.
Europe is also hoping that lawmakers in other regions use its new A.I. regulatory framework as a model for their own laws governing the technology, as has happened to some degree with GDPR.
Elements of both GDPR and the new EU Artificial Intelligence Act have already been echoed in a piece of U.S. legislation, the Algorithmic Accountability Act, that was introduced in the U.S. House of Representatives last Congress but never made it out of committee. A version of the law may be revived in the current Congressional session.
Meanwhile, the Federal Trade Commission last week took a small but historic step in the direction of regulating A.I., issuing updated guidance that made clear that using an algorithm that results in discrimination would constitute “unfair or deceptive practices” prohibited by the FTC Act. It also warned companies that they could fall afoul of the law by gathering training data for their A.I. algorithms in a misleading way or by over-hyping what their A.I. systems can actually do. “Keep in mind that if you don’t hold yourself accountable, the FTC may do it for you,” the agency warned in unusually stark language.
The winds from Europe and the U.S. are both blowing in the same direction: significant regulation of A.I. is coming. Get ready.
Below you’ll find the rest of this week’s A.I. news.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
****
Fortune has launched a new section, Fortune Education, to help you prepare for the new world of work. There’s an urgent need for degree programs that level up workers’ skills. Designed to be your guide in navigating the changing world of education, our team of experienced editors rank, rate, and recommend the right solutions for you and your career.
Our first list highlights the best online MBA programs. Later this year, you can expect rankings on the best traditional, part-time, and executive MBAs, along with recommendations for the top data science and analytics degree programs. Our mission will take us even further as we cover all types of personal and professional improvement programs aimed at your continued success. Check it out here.
A.I. IN THE NEWS
Toyota buys Lyft's autonomous driving division. The ride-sharing company Lyft has agreed to sell off its self-driving car division to Toyota for $550 million, according to The Wall Street Journal. Lyft said the sale would help it reach profitability sooner. It follows the decision by rival Uber to also abandon its efforts to create an autonomous vehicle and sell the research group working on it to self-driving startup Aurora. We may one day get to a world of self-driving taxis, ordered on demand from our phones, but that moment is a lot further off than Lyft and Uber initially predicted. And when it arrives, it looks like the companies building the cars will remain largely separate from those operating roving fleets of robo-taxis.
Speaking of self-driving cars, Baidu invests $7.7 billion in robotic "smart" cars. A report from Reuters says the Chinese search giant has formed a joint venture with automaker Geely called Jidu Auto that will invest $7.7 billion over the next five years in producing "smart" cars. Xia Yiping, Jidu's CEO, said the company would create its first electric vehicle within three years and that the car would "look like a robot." He said Jidu would sell cars directly to consumers without a dealership network.
U.S. banks are deploying camera surveillance A.I. systems on a wide scale to combat fraud. Among the banks installing the surveillance technology are J.P. Morgan Chase & Co., Wells Fargo, and City National Bank of Florida, Reuters said. The A.I. systems being used, which are produced by a variety of vendors or built in-house using components offered through various large cloud service providers such as Google, IBM and Amazon, include facial recognition. This is being used to identify customers and employees, and may in the future be used to find people on watchlists who are either known criminals or suspected fraudsters, according to bank executives Reuters interviewed. Chase said it was using the system in trials in branches in Ohio to figure out when the branches are most crowded so it can better schedule staff and improve customer service. But civil liberties groups are concerned about the undisclosed use of the camera systems, especially given already documented cases of mistaken identity with some facial recognition software used by police.
The biggest A.I. chip yet. Cerebras, the startup known for producing massive semiconductor wafers designed to run deep learning A.I. systems more effectively than other kinds of chips, has produced its largest chip yet: 2.6 trillion transistors on a silicon die that is about the size of a tablet computer. One of the big advantages of that massive wafer is not just the number of processing cores, the real brains of the chip, that it can pack in (850,000, more than twice what the previous generation of the chip contained) but also the amount of memory that can be co-located next to those processing units on the chip itself: 40 gigabytes. The chip is designed to deliver big reductions in the time and energy needed to train neural networks, according to a story on the new chip in tech publication The Register.
EYE ON A.I. TALENT
J.P. Morgan Chase & Co. has hired Eisar Lipkovitz to be its new chief information officer for its corporate and investment bank. Lipkovitz had previously been executive vice president of engineering at Lyft and head of its RideShare business, according to Times News Express. He also spent 15 years at Google in a variety of roles.
Renault Group, the European auto giant, has hired Luc Julia as group chief scientific officer, according to a company press release. Julia is a tech industry veteran who, during a brief stint at Apple in 2011, was credited as being the co-creator of the Siri digital assistant. More recently he has been the chief technology officer at Samsung Electronics.
EYE ON A.I. RESEARCH
Transformers have already conquered language. They are about to "transform" video generation too. One of the most important A.I. developments of the past five years has been the advent of a neural network architecture called a "Transformer." Originally developed by a team at Google for natural language processing, where it has underpinned rapid advances, the architecture has since been adapted for other domains, including computer vision. Among the most notable Transformer-based A.I. systems has been OpenAI's GPT-3, which can compose long passages of coherent, human-level writing in a wide variety of styles and formats. Now a team of researchers from the University of California at Berkeley, including well-known roboticist Pieter Abbeel and Aravind Srinivas, a graduate student who has focused on adapting Transformers for tasks such as visual recognition, has created a system called VideoGPT that can generate novel, naturalistic-looking videos.
The neural network architecture the researchers used is very similar to the one that underpins the new natural language processing models and that has been used before to create still images. But generating video is a more difficult task. The researchers say the system produces results that are competitive with videos created using the best generative adversarial networks (or GANs), the technology behind deepfakes, but that it requires far less training for each new video and far less computational power to train and run. That could make the technology even more accessible for tasks such as generating synthetic data for training robots or creating videos without having to hire actors. The research was published this past week on the non-peer-reviewed research repository arxiv.org.
FORTUNE ON A.I.
This A.I. startup is saving Walmart and other big companies millions by automating negotiations—by Jeremy Kahn
What Apple’s big privacy changes to iOS mean to you—by Danielle Abril
The drone future flies even closer—by Aaron Pressman
A.I.’s carbon footprint is big, but easy to reduce, Google researchers say—by Jeremy Kahn
Europe proposes strict A.I. regulation likely to have an impact around the world—by Jeremy Kahn
BRAIN FOOD
Are there enough datacenters in Finland to save the planet? That is a question we all might be asking after researchers at Google published the most complete look yet at the carbon footprint of training very large neural networks, such as the A.I. systems that underpin many breakthroughs in natural language processing. These models are massive, containing hundreds of billions of parameters, and they take serious amounts of energy to train. The Google paper is notable because it offers a far more accurate accounting of the amount of energy A.I. algorithms consume. It concluded that previous studies had massively over-estimated how power-intensive it really is to run A.I. systems in the cloud. That said, the carbon footprint of A.I. is still big and getting bigger all the time, as more and more companies deploy more and more A.I. algorithms and as the size of those algorithms grows dramatically.
For instance, the Google team was able to get exact data from San Francisco A.I. company OpenAI about how, when, and at which datacenters it trained GPT-3, the 175-billion-parameter language model that can generate long passages of coherent writing: the system was trained on Nvidia V100 graphics processing units running in a Microsoft datacenter, and it consumed the equivalent of 552 metric tons of carbon dioxide during its training. That's the same as operating 120 passenger cars for a year, according to an EPA equivalence calculator.
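For readers who want to check that equivalence, here is a minimal sketch of the arithmetic, assuming the EPA's commonly cited figure of roughly 4.6 metric tons of CO2 per typical passenger car per year; the exact conversion factor behind the calculator isn't given here, so treat it as an assumption.

```python
# Rough sketch of the CO2 equivalence math above, not the researchers' own calculation.
# Assumes the EPA's commonly cited ~4.6 metric tons of CO2 per passenger car per year.

GPT3_TRAINING_EMISSIONS_TONNES = 552   # figure reported for GPT-3's training
CO2_PER_CAR_PER_YEAR_TONNES = 4.6      # assumed equivalence factor

cars = GPT3_TRAINING_EMISSIONS_TONNES / CO2_PER_CAR_PER_YEAR_TONNES
print(f"Equivalent to about {cars:.0f} passenger cars driven for a year")  # -> about 120
```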
But the researchers said there were ways to drastically reduce the CO2 burden of A.I.: using new computer chips specifically designed for training large neural networks, using more efficient "sparse" algorithms, and, most importantly, shifting the training of A.I. systems to large cloud datacenters located in places where the electric grid is greener. For Google, the researchers noted, the best place to train an A.I. algorithm in order to lower its carbon footprint was the company's datacenter in Finland. But are there enough datacenters in Finland to really power all the A.I. systems the world is about to deploy? Probably not. That means companies will need to carefully monitor the energy mix powering the grid at the specific datacenters they use (a figure that can change hourly in some cases) and look for efficiencies in both the algorithms they run and the computer chips they run them on.
The paper is worth having a look at. But it is also important to mention that it has sparked outrage among many A.I. researchers for something it doesn't do: cite a previous paper by Google A.I. ethics co-lead Timnit Gebru and her team that also raised the issue of the carbon footprint of ultra-large language models. It was that paper by Gebru that Google refused to let her and her team submit to a conference, claiming it didn't meet the company's standards. Gebru's wrangling with her managers over that decision touched off the chain of events that led to Gebru, one of the few Black A.I. researchers at Google, being forced out of the company amid public outcry. One of the executives who had a role in Gebru's ouster, Jeff Dean, is a co-author on the new paper on reducing A.I.'s carbon footprint. On Twitter, some researchers pointed out that failing to cite Gebru's prior work in the same area seems like exactly the kind of lapse he had said was a reason her paper didn't meet the company's standards for publication.
The apparent double-standard—as well as what some critics saw as an attempt to erase Gebru's contribution on the subject—has threatened to re-ignite the firestorm over the way Google dealt with Gebru and subsequently fired her co-lead Margaret Mitchell. That was probably not the reception Google hoped this latest paper would receive.