Under fire for widespread abuse and misinformation on its service, Twitter argued on Tuesday that it’s taken aggressive steps to police what users post.
In the past year, the company says it has tripled the number of workers who monitor the site to 1,500. And it has added machine learning technology to detect and remove objectionable content.
But executives acknowledged that there’s a lot more work to be done before the company can declare success.
“There’s no silver bullet to addressing it,” Kayvon Beykpour, product lead at Twitter, said during a presentation at the company’s San Francisco headquarters. “We have to think of every decision we make as a series of tradeoffs.”
On Tuesday, Twitter executives gave journalists a rundown of their latest efforts to combat the problem. The company wants to show that it’s serious about eliminating the widespread harassment on its service, which has festered for years despite repeated assurances from executives, including current CEO Jack Dorsey, that they would get the situation under control.
“A lot of these abuse and information integrity issues are not just policy and enforcement problems, they’re incentive problems,” said Beykpour, referring to the fact that users get more likes and retweets for inflammatory posts. “We have to address these things holistically.”
Twitter said its army of nearly 1,500 reviewers, a mix of employees and contractors, now works from nine hubs worldwide, including San Francisco, Dublin, Budapest, and Manila. A year ago, the company had just two such offices, staffed by around 500 workers.
The reviewers are evaluated on how well they handle customer complaints rather than on how many cases they plow through. As a result, reviewers feel they have more time to make good decisions and to seek help from colleagues, said Donald Hicks, vice president of one of the groups that polices the service.
Twitter says its top priority is to proactively police its site rather than putting the burden on its users to report bad behavior. To do so, it is increasingly relying on machine learning, a technology that automatically identifies objectionable content.
It’s a big shift in strategy from the past, when Twitter was reactive: if it identified a new kind of problem, it would simply write a new rule to address it, without doing much else, said Del Harvey, vice president of trust and safety.
In a hopeful sign, Twitter’s machine learning technology now proactively recognizes 90% of tweets involving the exploitation of children and terrorist activity, said David Gasca, senior director of product management for Twitter health. Overall, though, the technology flags only 40% of all objectionable content.
The hardest kind of banned content to police is abusive tweets, Gasca said.
“It requires a lot more machine learning models to identify,” he said. “We have to create a number of models that basically score it on the likelihood that it could be abusive and then submit it for review.”
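Twitter has not disclosed how these models work, but the pipeline Gasca describes, scoring a tweet on its likelihood of being abusive and routing high scores to human reviewers, resembles a standard text-classification setup. Below is a minimal illustrative sketch in Python using scikit-learn; the toy training data, classifier choice, and review threshold are all invented for illustration, not details of Twitter’s actual system.

```python
# Illustrative sketch only; Twitter's real models are not public.
# The toy training data and threshold below are invented assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny stand-in dataset: 1 = abusive, 0 = benign.
tweets = [
    "have a great day everyone",
    "congrats on the new job!",
    "I will hurt you",
    "get out of here, you idiot",
]
labels = [0, 0, 1, 1]

# Convert tweet text into numeric features and fit a simple classifier.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(tweets)
classifier = LogisticRegression()
classifier.fit(features, labels)

REVIEW_THRESHOLD = 0.5  # arbitrary cutoff chosen for this sketch

def triage(text: str) -> str:
    """Score a tweet's likelihood of abuse; send likely cases to human review."""
    likelihood = classifier.predict_proba(vectorizer.transform([text]))[0][1]
    if likelihood >= REVIEW_THRESHOLD:
        return "submit for human review"
    return "no automatic action"

print(triage("get out of here, you idiot"))
```

In a production system, a score like this would typically come from many specialized models whose outputs are combined, as Gasca suggests, with human reviewers making the final call on borderline cases.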