Since August 2018, Lynne Parker has worked in the White House Office of Science and Technology Policy, helping steer the executive branch’s approach to research, innovation, policy, and investment in artificial intelligence. Under the Biden administration, she was promoted to director of the National Artificial Intelligence Initiative Office in early 2021, a role that puts her at the forefront of the country’s domestic and international investments and education in the A.I. space.
As the final speaker at the Fortune Brainstorm A.I. conference in Boston this week, Parker discussed this relatively new position and shared the initiative’s three main goals. “The first one is all about research. The innovations that we’re able to take advantage of today, many of them are due to decades of investments by the federal government [in] research, so there’s a recognition that the advances for the next decade need to have that research underpinning as well,” she said. “A second goal is in making sure that the U.S. leads the world in the development and use of trustworthy A.I. in both the public and private sectors. That means the federal government itself, and its use of trustworthy A.I., as well as the private sector. Obviously, there are many things that need to happen for that to take place. And then the third [goal] is to make sure that we’re educating people to have the skills and the talents and the opportunities to learn about the A.I.-enabled jobs of both today and the future.”
Parker revealed that her office is currently in the process of forming a national A.I. advisory committee that will consist of members of the academic and business community “to make sure that we democratize access to the cyber infrastructure and the data that’s needed to enable anyone to engage in A.I. research.” She also said it’s key that the government is invested in long-term fundamental research while businesses are working more on the product side, making it a public-private partnership that covers all of the A.I. bases.
Although the two major parties are at odds on many issues, Parker said that, when it comes to A.I., most of the legislation is passed on a bipartisan, bicameral basis. There isn’t a red-blue divide on A.I.—rather, there’s a difference in how politicians want to handle it.
“There are many people that care very deeply about the responsible use of A.I. and are concerned that maybe any A.I. may turn into negative A.I. That’s more on one end of the spectrum,” she said. “On the other end of the spectrum is the recognition by some that if we don’t have innovation, then we’re not going to be able to take advantage of the many benefits of A.I. But I think you see people on all sides of the political spectrum recognizing the importance of balancing innovation with the proper governance and having responsible use of A.I.”
When asked if the United States should have a vision for the regulation of A.I. that’s similar to the EU’s General Data Protection Regulation, or GDPR, Parker said it’s a must.
“There’s a growing recognition that if we have just a patchwork of regulatory approaches, it’s not helping innovation at all,” she said. “This is a really good example of how regulation can actually spur innovation. If you don’t have trust in these technologies, then you’re not going to be able to use them. Trust can come partially through governance, it could be regulations, and so forth. So, the right regulation can actually spur innovation, and there’s also a strong recognition that we around the world need to do this together. It’s important for like-minded democracies to come together and say, ‘Hey, these are technologies that we want to use in a way that affirms democracies.’ So how do we do that? I think we do that by looking consistently at how the regulation needs to take place, and certainly the EU A.I. Act is a very good comprehensive approach that the U.S. should consider.”
International cooperation is also a big issue for Parker’s initiative, as she discussed when asked to compare the U.S.’s approach to A.I. versus China’s.
“When it comes to the responsible use of A.I., there are a lot of concerns about many of the use cases that authoritarian countries, including China, are demonstrating,” she said. “Obviously, we don’t want to use technology in order to repress people or to suppress opinions and so forth. That’s why this administration has been very firmly behind the idea of how we can build up technologies that affirm democracies. It’s the idea of having technologies that are inherently used in a way that’s responsible. Let me give you an example, which is privacy-enhancing technologies or privacy-preserving machine learning; these are kinds of techniques that can protect people’s privacy while at the same time [making] good use of that data. One of the big advantages that China has is they have massive amounts of data. But if we can have more and more use of technologies like privacy-enhancing technologies and use these in cooperation with like-minded democracies from around the world, then we’ll be better able to compete in domains where our competitors don’t necessarily have any guardrails to drive their work.”
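One widely cited privacy-enhancing technique of the kind Parker describes is differential privacy, which adds calibrated statistical noise so an aggregate can be published without exposing any individual record. The sketch below is illustrative only (the survey data, bounds, and privacy budget are invented for the example, not drawn from the article): it releases a differentially private mean by clamping each value and adding Laplace noise scaled to the query's sensitivity.

```python
import math
import random

def laplace_noise(scale):
    # Draw a Laplace(0, scale) sample via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lower, upper, epsilon):
    """Differentially private mean of bounded values.

    Clamping each record to [lower, upper] caps any one person's
    influence on the mean (the sensitivity), so Laplace noise with
    scale = sensitivity / epsilon masks individual contributions.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical survey of ages, released with a privacy budget of 1.0.
ages = [34, 29, 41, 52, 38, 45, 31, 27]
print(private_mean(ages, lower=0, upper=100, epsilon=1.0))
```

The tradeoff Parker points to is visible in the `epsilon` parameter: an analyst still gets a usable aggregate from the data, while no single respondent's age can be confidently reconstructed from the noisy output.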
This cooperation extends to cybersecurity, an area that’s beset by bad actors using machine learning to exploit vulnerabilities in government and corporate IT systems. “This is certainly a challenge with many kinds of technologies—and it’s not unique to artificial intelligence—the idea that you’ve got dual uses for these technologies,” she said. “I think, as with any technology, we need to make sure that we have the right rules in place, the right governance in place, to make sure that as we’re using these technologies, we keep in mind the guardrails that we need. China has a civil-military fusion, and they recognize this as well. What we want to do is always do it the right way on our side. There are a number of principles that the Department of Defense has set forth, as well as the civilian agencies and the intelligence community, so always operating according to our principles is a way to make sure that we’re always doing it the right way.”
Finally, she touched on what’s been a running theme at Fortune’s Brainstorm A.I. conference—the need to make sure artificial intelligence is equitable and ethical. Right now, she said, both the private and public sectors are still figuring out the “appropriate” ways to determine what’s responsible when it comes to transparency and explainability.
“It keeps coming back to the issue of trust. How do you trust a system if you don’t know how it’s making decisions or making recommendations?” she said. “We need the ability, in some cases, to understand what the A.I. system is doing and for it to be able to explain to me if I have a really good reason not to trust a system. If it’s making a decision based on some crazy reasoning, I need to know that. But I think it’s also helpful to compare it to other kinds of systems or other kinds of situations, like maybe certain treatments for diseases. We don’t fully understand how all drugs and treatments work, we just know that, empirically, they work. So there are some cases where we might be able to say, ‘Okay, this A.I. system is working. No one’s life is at risk, no one is going to have their civil rights violated.’ For lower-risk use cases, we might not need to have a deep and robust explanation. But it’s true—in many applications, there’s a tradeoff right now. Either it’s a highly accurate system that you don’t fully understand, or it’s a more understandable system, but the level of performance is not as high. Hopefully, as research advances, we will not have to make that tradeoff anymore. We’ll have high-performing systems that we also understand. That’s the goal with the research.”
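The "more understandable system" end of the tradeoff Parker describes can be made concrete with a toy example. The sketch below (the feature names, weights, and loan-style scenario are hypothetical, invented for illustration) shows why a linear scoring model is considered explainable: every decision decomposes exactly into per-feature contributions that can be reported back to the person affected.

```python
# Hypothetical weights for an interpretable linear scoring model.
WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.3}
BIAS = -0.1

def explain(applicant):
    """Score an applicant and return the decision with an explanation.

    Each feature's contribution is simply weight * value, so the model's
    output is fully decomposable -- unlike a black-box system, we can
    say exactly which factors pushed the decision and by how much.
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    decision = "approve" if total > 0 else "deny"
    # Rank features by the strength of their influence on this decision.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

applicant = {"income": 1.2, "debt": 0.9, "years_employed": 0.5}
decision, ranked = explain(applicant)
print(decision)   # the model's call
print(ranked)     # which features drove it, strongest first
```

The "crazy reasoning" check Parker mentions falls out for free here: if the top-ranked contribution were an irrelevant feature, that would be immediately visible. High-performing deep models generally lack this exact decomposability, which is the research gap she says her office hopes will close.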