Europe wants businesses to share their data and open up their A.I. systems for scrutiny

February 19, 2020, 3:23 PM UTC

The European Union can hardly be counted as one of the data industry’s big winners, with its tech firms more often than not playing second fiddle to rivals from the U.S. and China.

But the European Commission, which proposes EU legislation, wants to change that. It hasn’t come up with new laws just yet—stay tuned for a big reveal later this year—but on Wednesday it laid out what Commission President Ursula von der Leyen called “our ambition to shape Europe’s digital future.”

The EU is well-known for strongly regulating the use of personal data—information that can be linked to an identifiable individual. But there’s a lot of data that doesn’t fall into that category, relating instead to industrial processes, or comprising anonymized health and transportation datasets.

That’s where the Commission sees opportunity, particularly if it can convince both the public and private sectors to open up their data troves.

“Our society is generating a huge wave of industrial and public data, which will transform the way we produce, consume and live,” said Thierry Breton, the commissioner for the EU’s internal market. “I want European businesses and our many SMEs to access this data and create value for Europeans—including by developing artificial intelligence applications. Europe has everything it takes to lead the ‘big data’ race, and preserve its technological sovereignty, industrial leadership and economic competitiveness to the benefit of European consumers.”

Speaking of artificial intelligence, the Commission also published a whitepaper that presents policy options for regulating A.I., EU-style—that means a big focus on people’s rights, explainable algorithms, and very tight regulation of facial-recognition systems.

Share and share alike

The idea of sharing and reusing data is not new in Europe. Last year, a new EU law came into force that is supposed to encourage the practice within the public sector, and the Commission also issued non-binding guidelines for businesses that want to do the same. But what, in the case of the private sector, does data sharing actually mean?

As explained in one of Wednesday’s many strategy documents, companies should “have easy access to an almost infinite amount of high-quality industrial data” that allows them to create new products and services. “The organizations contributing data would get a return in the form of increased access to data of other contributors, analytical results from the data pool, services such as predictive maintenance services, or license fees,” the Commission suggested.

In other words, companies wouldn’t be forced to share the data they record in their factories and through their networks of Internet-connected sensors, but rather coaxed to do so.

Margrethe Vestager, the Commission’s digital policy chief as well as its competition czar, said in a press conference that many companies are keen to share their industrial data, but fear antitrust implications.

“A lot of people come to us and say, ‘We would like to co-operate but we are afraid that you would see a cartel,’” she said. “We say, ‘Come to us with more specific proposals and we will make sure that we see no cartel.’ … If someone comes this afternoon, we stand ready to give specific guidance as to how to set this up.”

Although Wednesday’s announcements were not generally about personal data, there was some crossover into the territory covered by the General Data Protection Regulation, the tough privacy law that came into force in 2018.

The Commission said it would “explore how to give citizens better control over who can access their machine-generated data.” It also floated the idea of “technical tools and standards” that could help people exercise their GDPR-enshrined right to transfer their personal data from one platform—such as Facebook or Amazon—to a rival platform. This, the EU executive said, would “enable novel data flows, protect consumers and foster competition.”

A.I. regulation

When it comes to artificial intelligence, the Commission is trying to draw a distinction between low-risk and high-risk use cases. Where the risk to people’s fundamental rights is low, regulation should be light-touch. Here, the Commission is talking about voluntary schemes to let companies label their A.I. applications as safe. But where the risk is high, such as in health or policing applications, regulation will be extremely tight.

In these high-risk cases, there will have to be human oversight, the Commission said. And that means giving regulators the ability to inspect what is going on inside the black box. “Authorities should be able to test and certify the data used by algorithms as they check cosmetics, cars or toys,” the Commission said.

These inspections would need to take place before high-risk A.I. services are launched in the EU. What’s more, such systems will need to be trained on “unbiased data” so as to minimize the risk of discrimination.

As for the hot topic of facial recognition, this is an area that is already covered by the GDPR. There’s no conflict with the type of facial recognition systems that are used to unlock people’s phones or check identities at the border, Vestager noted, but there is a general prohibition on the type that identifies people as they walk down the street. The GDPR makes explicit consent a condition for processing biometric data, and merely going out in public spaces does not qualify as explicit consent.

However, the Commission might make Europe’s facial recognition rules even tougher. At the moment, Europe’s rules do allow for some deployments of the technology in public spaces, such as for targeted policing operations. The Commission said Wednesday that it “wants to launch a broad debate about which circumstances, if any, might justify such exceptions.”

The A.I. whitepaper is open for consultation until May 19, and the Commission also said it is “gathering feedback” on its data strategy.

So, let the lobbying begin. Big Tech will certainly be trying to bend the Commission’s ear as much as possible in the run-up to its legislative proposals late this year. After that, it will lobby members of the European Parliament as they scrutinize the draft laws—and a lot of meetings will be conducted in Europe’s capitals, too, as member states will have the final say on whatever is proposed.

But the likes of Facebook and Google may not find a warm reception. The strategy unveiled Wednesday was couched in terms of “Europe’s technological sovereignty” and—as demonstrated by the failure of U.S. lobbyists to significantly soften the GDPR or last year’s Copyright Directive—the EU’s lawmakers are no friends to Silicon Valley.

When Facebook CEO Mark Zuckerberg visited Brussels on Monday, ahead of the tech-strategy publication, he came proclaiming enthusiasm for new regulation, but got a cold shoulder from lawmakers who said Facebook should ideally clean up its own act without regulators forcing it to do so. On this side of the Atlantic, there is none of the deference afforded to the likes of Zuckerberg on Capitol Hill.

“My pledge is not to make Europe more like China or more like the U.S.,” Vestager said Wednesday. “My pledge is to make Europe more like herself.”
