By Ellen McGirt
August 30, 2016

In a rare bit of good news, a technology company has managed to cut the number of racist posts on its platform by 75%. Its secret? Empathy. (And also months of hard work, lots of prototyping, and plenty of expert input.)

Nextdoor is a private social network for neighborhoods, where people talk about local stuff, like lost pets and the quest for a reliable contractor. And since good neighbors keep an eye on things, they often report criminal or suspicious activity in their “Crime and Safety” forums. But as a Fusion story revealed last year, that otherwise valuable benefit was being spoiled by a growing number of posts based on racial profiling: neighbors peering from windows and identifying “sketchy characters” who were actually somebody’s black or brown guest.

Nextdoor CEO Nirav Tolia was shocked by the story. “I hadn’t seen it in my own neighborhood’s Nextdoor,” he says. But he and his team took it seriously. “These posts were less than 1% of 1% of all our posts,” he says, though left unchecked, that share could certainly have grown. “We made the decision that the damage that could be done by any of these posts was just too much, and we couldn’t dismiss it.” Besides, he says, “We knew these posts were not who we wanted to be as a company.”

They also had to navigate the dicey territory of asking someone who was already nervous about something to reconsider their behavior, without shaming them or implying that they were racist.

That’s where empathy played a key role. They relied on the research of Stanford professor Jennifer Eberhardt, who studies bias in the criminal justice system. And the team tapped experts like the Oakland Police Department and Neighbors for Racial Justice for language on bringing up race effectively.

And then they used technology to frame a polite conversation with a user who thought something bad might be happening. “We tried to create decision points,” Tolia says, “to get people to stop and think as they’re observing people, to cut down on implicit bias.”

Eventually the team came up with a series of tips that gently prompt users to think more deeply about what constitutes genuinely suspicious activity. Ask yourself: Is what I saw actually suspicious, especially if I take race or ethnicity out of the equation?

These tips helped preserve the dignity of the poster. “You don’t accuse, you try to defuse, and approach people before they’ve made the decision to profile,” says Tolia.

After many iterations, they were able to reduce racial profiling by 75% in test markets. They’ve now rolled out the new interface to all 110,000 neighborhoods they serve.

Fusion has an excellent follow-up story that details how the company tested various solutions.

But when you consider how segregated American neighborhoods tend to be, it’s not surprising that profiling became an issue. We tend to know only people who look like us. And that means that profiling behavior becomes an effortless part of everything we do.

I asked Tolia for his best advice for other leaders facing their own version of a race-based issue (cc: Airbnb), which I’ve boiled down to a four-step process: name the problem, even if it’s uncomfortable; create a hypothesis to test solutions; tap diverse experts for help; and get serious about testing and iteration.

But at the core is empathy, for everyone involved. Tolia grew up in Odessa, Texas, the first-generation American son of Indian immigrants. “To my parents, I was an American,” he says. “Outside my house, I was Indian.” That cognitive dissonance of being seen as a threatening ‘other’ was not lost on him—or his diverse founding team—when they made the issue a priority.

“As embedded and nasty as racism can be, it can be overcome,” says Tolia. But there is no magic bullet. “It will take a thousand initiatives, from lots of places,” he says. “We want to be putting points on the board as just one.”
