
How do you govern a (hidden, fluid and amoral) algorithm?

March 19, 2015, 12:02 AM UTC
MENLO PARK, CA - APRIL 04: A Facebook employee holds a laptop with a "like" sticker on it during an event at Facebook headquarters on April 4, 2013 in Menlo Park, California. Facebook CEO Mark Zuckerberg announced a new product for Android called Facebook Home as well as the new HTC First phone that will feature the new software. (Photo by Justin Sullivan/Getty Images)
Photograph by Justin Sullivan — Getty Images

Computers are everywhere. In our thermostats, in our cars and, of course, on our mobile devices. Increasingly, the algorithms that dictate how those computers behave have a subtle influence on our physical and psychological well-being. But how should society oversee them, given that they are generally kept secret, are liable to change and are subject to the biases of their programmers?

For example, my Nest thermostat is optimized to conserve energy while trying to maintain my comfort. On occasion, however, it sets the temperature too low in the winter or too high in the summer — leading me to “fight the Nest.” A deeper example can be found in a recently published study that looked at how people engaged with Facebook’s News Feed algorithm, showing that 62.5% of participants were not aware of the algorithm’s existence.

The News Feed algorithm dictates which posts you might see in your Facebook (FB) stream of news from friends and advertisers. The study installed software called FeedVis, which let participants see all the posts from their friends and compare them to the posts that Facebook’s algorithm showed them. The results highlighted some depressing links between the algorithm and how people viewed their real-world relationships. From the study, which was conducted in 2014 with a very small sample size of 40 participants:

Importantly, some participants disclosed that they had previously made inferences about their personal relationships based on the algorithm output in Facebook’s default News Feed view. For instance, participants mistakenly believed that their friends intentionally chose not to show them stories because they were not interpersonally close enough. They were surprised to learn via FeedVis that those hidden stories were likely removed by Facebook: “I have never seen her post anything! And I always assumed that I wasn’t really that close to that person, so that’s fine. What the hell?!”
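At its core, the FeedVis comparison amounts to diffing the unfiltered feed against the curated one. Here is a minimal sketch of that idea in Python — the function and data are hypothetical illustrations, not FeedVis’s actual code, which the study does not publish in this article:

```python
# Illustrative sketch (not FeedVis's real implementation): given the
# full list of posts a user's friends published and the list the feed
# actually displayed, compute which stories the algorithm hid.

def hidden_posts(all_friend_posts, shown_posts):
    """Return post IDs present in the unfiltered feed but absent from
    the curated feed, preserving the original ordering."""
    shown = set(shown_posts)
    return [post for post in all_friend_posts if post not in shown]

# Hypothetical example data
all_posts = ["ann:vacation", "bob:article", "ann:photo", "cho:update"]
displayed = ["bob:article", "cho:update"]

print(hidden_posts(all_posts, displayed))  # ['ann:vacation', 'ann:photo']
```

The surprise participants reported — “I have never seen her post anything!” — corresponds to a friend whose posts appear only in the first list, never the second.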

Two of the study’s authors spoke on a panel at SXSW in Austin, Texas, where they touched on the topic of biased algorithms that placed ads for arrest records next to traditionally African-American names in search engines, and more generic phone and address look-ups next to other names. They also discussed poorly designed algorithms such as the one Facebook uses to ferret out people using fake names (which apparently is still flagging accounts belonging to Native Americans, likely due to the cultural biases of its original programmers).

This isn’t just a social networking or connected device problem. The Obama Administration is concerned enough about the risk of discrimination by algorithm that its 2016 fiscal year budget includes $17 million for data science pilots at the National Science Foundation that would study related issues. Information gleaned from these pilots would help other federal big data research projects develop the expertise needed to address issues like discrimination via algorithm. In February, the White House also promised that its Domestic Policy Council and the Office of Science and Technology Policy would issue a follow-up report further exploring the implications of big data technologies for discrimination and civil rights.

But what do we do about bad algorithms? Karrie Karahalios, a computer science professor at the University of Illinois at Urbana-Champaign, promotes the establishment of algorithmic literacy efforts. As one of the authors of the FeedVis paper, she noted that 83% of the participants in the study changed their behavior once they knew about the algorithm, and most said they liked using Facebook more with their newfound knowledge.

Her co-panelist at SXSW and co-author, the University of Michigan’s Christian Sandvig, disagreed. Given the number of algorithms in our daily lives and the critical thinking skills required to dissect how each might affect us, he argued, such an effort would be both difficult and elitist.

Instead, he proposed some kind of third-party nonprofit or industry oversight committee to help ensure that proprietary algorithms meet certain standards. In the meantime, we’re going to have to hope that algorithms are kept in check by moral outrage and regular monitoring. That’s relatively cold comfort. Or maybe my Nest is just acting up again.
