An influential Vatican institute and top officials from the European Union, United Nations, IBM and Microsoft have joined forces to push for ethical development of artificial intelligence.
The officials have signed a document, called the “Rome Call for AI Ethics,” that was presented to Pope Francis on Friday.
“A.I.-based technology must never be used to exploit people in any way, especially those who are most vulnerable,” the document says. “Instead, it must be used to help people develop their abilities and to support the planet.”
The document lays out six broad principles for the ethical development of A.I. It was created under the auspices of the Pontifical Academy for Life, a Vatican institute tasked with reconciling the Catholic Church’s teachings on the sanctity of human life with emerging issues in science and law.
In the past, the academy’s views have shaped the Church’s sometimes controversial positions on such emerging fields as stem cell research, gene editing, and cloning.
Vincenzo Paglia, the academy’s president, said that the ethical principles contained in the letter “must become the patrimony of everyone and not only the experts on artificial intelligence.”
Brad Smith, Microsoft’s president, and John Kelly III, executive vice president of IBM, both said their companies were proud to support the Rome Call. “We’re grateful for the Vatican’s leadership in pushing for the A.I. era to be underpinned by ethics,” Kelly said in a speech at the academy in Rome.
Alongside Kelly, Smith, and Paglia, the signatories included David Sassoli, president of the European Parliament, and Qu Dongyu, director general of the United Nations’ Food and Agriculture Organization.
Last year, Pope Francis, speaking before a group that included representatives of a number of Silicon Valley companies, warned that technologies such as A.I. could, if not guided by “both theoretical and practical moral principles,” lead to “a new form of barbarism.” He also said he was worried about A.I.’s ability to feed disinformation campaigns that “poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.”
The new ethics document puts the Catholic Church at the forefront of major world religions in working toward a possible doctrine on artificial intelligence. But the Vatican is joining a long list of governments, companies, and nongovernmental groups that have articulated A.I. ethics principles in the past three years. In just the past month, for instance, the European Union laid out its strategy for A.I. development and regulation in a white paper, and the U.S. Department of Defense issued ethical guidelines for the use of A.I.
Some data ethicists and human and civil rights campaigners have criticized these earlier efforts for being too vague and for sidestepping explicit prohibitions on certain uses of A.I., such as facial recognition in government surveillance systems, algorithmic decision-making in criminal justice, or autonomous weapons. Some computer scientists, meanwhile, have faulted them for being either impractical or overly restrictive, limiting beneficial uses of A.I.
Microsoft and IBM both belong to an organization called the Partnership on AI, which has articulated its own set of ethical principles to guide A.I. developers but has been accused by some critics of using its influence to water down government regulation of the technology. That organization also includes, among others, Facebook, Google, and Amazon—three large technology companies that have put A.I. at the core of their business plans and that are notably absent from the Rome Call, although they may be allowed to sign the document at a later date.
The Rome Call’s six principles are transparency, inclusion, responsibility, impartiality, reliability, and security and privacy. As with other ethical A.I. charters, it too may come in for criticism for being vague, impractical, and generally lacking in enforcement mechanisms.
For instance, under transparency, the Rome Call says “in principle, A.I. systems must be explainable.” But some of the most effective machine-learning algorithms are also extremely difficult for humans to interpret. Some prominent A.I. researchers worry that a requirement that such algorithms be explainable will limit or delay many potentially beneficial uses of the technology in areas such as medicine.
The document also doesn’t specify whether an algorithm needs to be explainable to all stakeholders—its creators, its users, and those impacted by its output—or just some of them.
Reliability is also an ill-defined term, with A.I. researchers disagreeing about what standard should be used for measuring how robust an algorithm is.
IBM, along with Microsoft, sees the sale of A.I.-enabled services and access to cloud-based data centers for A.I. applications as a key revenue driver. Kelly said in his speech at the institute, however, that IBM puts A.I. ethics before profits. “We don’t simply hand over our technology to anyone who will pay for it,” he said. “We ensure clients we work with are using technology in ways that better our human experience.”
Microsoft, for its part, has expressed reservations about the use of facial recognition technology and said it will never sell the technology for government surveillance.
Neither company, however, has said whether it will take steps to police the applications customers run on its cloud services. In some cases, customers use these clouds to run A.I. software they’ve developed internally or bought from third-party providers. Microsoft, for instance, recently won the Pentagon’s controversial Joint Enterprise Defense Infrastructure (JEDI) contract and will run many of the U.S. military’s workloads, including A.I. applications involved in war fighting. Some of that work—such as using drones for targeted killings—might not sit easily with the Vatican’s ethical principles.