Twitter is testing a decidedly simple strategy to stop widespread bullying and abusive comments on its service: showing users its rules.
CEO Jack Dorsey said Friday that Twitter was trying the tactic to “diminish abuse,” which has plagued Twitter’s users for years despite the company’s repeated promises to clean up the problem.
The new study will be led by outside researchers including the Dangerous Speech Project, a research group that studies how public speech can incite harmful and even violent behavior. In a post announcing the new test, Susan Benesch and J. Nathan Matias, academic researchers who will help conduct the study, explained that it will likely involve showing Twitter users unspecified rules about proper behavior, which could reduce harassment and the number of vile comments.
“Social norms, which are people’s beliefs about what institutions and other people consider acceptable behavior, powerfully influence what people do and don’t do,” the pair wrote.
As an example, they cited outside research and “early evidence” from a previous study Matias conducted on Internet messaging board Reddit that involved showing readers of Reddit’s “r/science” forum rules for commenting. Some of those included warnings like “no abusive, offensive, or spam comments,” and “no personal anecdotes.”
Matias’ Reddit study, conducted over 29 days during which he and Reddit moderators screened 2,214 discussions on the science forum, found that making the rules easy to see (via “sticky comments,” as they are called on Reddit) made Reddit users 7.3% “more likely to follow the rules.”
Still, Matias concedes in his Reddit study that many unknowns remain. For example, it’s unclear whether posting rules to other Reddit forums like politics or video games, which can be breeding grounds for controversial comments, would produce similar improvements.
Nevertheless, it seems that Twitter will try something similar to the Reddit study.
The researchers said they wanted to announce the study before it begins because they want “to be as transparent as possible.” However, they declined to share details of the study because doing so may jeopardize “the integrity of the results.”
Twitter did not say when the study will conclude, but the researchers said they would publish their findings and post their methodology on a public research website so that others can try to replicate it to confirm their results.
As for the belief that Twitter should simply delete the accounts of people who continually post racist, abusive, or otherwise hostile comments, the researchers said doing so wouldn’t stop others from posting similar hateful comments.
“It’s like recalling unsafe foods without preventing new food from being contaminated and sold — or towing away crashed cars without trying to make new cars safer,” the researchers wrote. “Abuse can’t be solved by any single method, since it’s posted in so many forms, by a wide variety of people, and for many reasons.”