
Elon Musk, Amazon Create Artificial Intelligence Research Center

December 11, 2015, 11:24 PM UTC
[Photo: Artificial intelligence. Photograph by Mike Agliolo — Getty Images/Photo Researchers RM]

Several technology giants have banded together to invest $1 billion to research artificial intelligence, an increasingly important technology used in self-driving cars, facial recognition, and online advertising.

The effort, announced Friday, is designed “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” said a blog post by the non-profit research company. Any advances will be made publicly available to help to spur innovation in the field.

The new company, OpenAI, is backed by donations from tech luminaries including Elon Musk, the CEO of Tesla and SpaceX; LinkedIn co-founder Reid Hoffman; PayPal co-founder and investor Peter Thiel; and Sam Altman and Jessica Livingston of the startup accelerator Y Combinator. Corporate backers include Amazon’s cloud computing arm, Amazon Web Services, and Infosys.

The company expects to spend only a tiny fraction of the $1 billion donated over the next few years, according to its announcement.

OpenAI’s research director is Ilya Sutskever, a former Google research scientist. Greg Brockman, former chief technology officer of digital payments processor Stripe, will serve in the same role at the non-profit. The rest of the group reads like a who’s who of artificial intelligence, and Altman and Musk will serve as the company’s co-chairs.

Currently, much artificial intelligence research takes place within companies like Google (GOOG), Facebook (FB), Microsoft (MSFT), and Baidu, and at universities closely affiliated with commercial interests. While the work is often shared via research papers, it is ultimately guided by those commercial interests.

Apparently, the creators of OpenAI are concerned by this practice. From the OpenAI blog/manifesto on Friday:

Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

As someone who has covered this space and spoken with the researchers building the neural networks applied to tasks as mundane as tagging photos or as grand as recognizing cancer cells, it’s great to see an organization like this exist. But I question the lack of participation from people outside the tech bubble. I’d love to see people from Washington, D.C. involved, or social justice activists who believe that computer algorithms have already unfairly affected their communities through the assumptions built into current software.

As we build artificially intelligent machines, we are still training them with biases and goals that can lead to outcomes reflecting the world their programmers want to see, not the world we actually live in or even want to live in. To solve for that, any OpenAI effort needs to be both open and willing to look beyond the narrow confines of the tech world.

Update: It looks like another group professing similar goals was created Friday in London. The group looks to be bringing in a wide variety of participants, calling for activists, lawyers, technologists, and philosophers to join the discussion. The Foundation for Responsible Robotics was created by Aimee van Wynsberghe, an assistant professor in ethics of technology at the University of Twente’s Department of Philosophy, and Noel Sharkey, a robotics professor at the University of Sheffield, according to the International Business Times.

The article about that group’s creation points to artificially intelligent robots being deployed in elder care and child care in Europe and Japan without much consideration of the ethics surrounding privacy, or of how they may affect human interaction, as an example of AI and robots moving faster than our discussions about their use. Perhaps the two organizations can talk.


Make sure to sign up for Data Sheet, Fortune’s daily newsletter about the business of technology.