A nonprofit called WAM! spent three weeks fielding harassment reports for Twitter. Here's what it learned.


Last year, a two-person nonprofit advocacy organization called Women, Action, & the Media (WAM!) announced it would collaborate with Twitter to address online harassment. You may have heard about this: by WAM's count, the announcement generated more than 204 stories in 21 countries.


It was widely misunderstood as a major partnership with Twitter; in fact, both parties agree it was more of an experiment. For three weeks in November, WAM became an "authorized reporter" for Twitter, which meant the group could identify and report abusive content on behalf of other people. It accepted reports of harassment via an interface on its website, escalated reports it believed had merit to Twitter, and used the opportunity to better understand online harassment reports and how the social media platform responds to them.

During the experiment, WAM reviewers received 811 reports of harassment. They escalated 161 of those reports to Twitter, which responded by suspending 70 accounts, handing out 18 warnings, and deleting one account.


This week, WAM released its analysis of the experiment, which offers a rare look into the abuse people report to Twitter and how the social network responds to it.


Here’s what WAM learned:

The majority of people who reported harassment did so on behalf of someone else. About 57% of the reports WAM received came either from bystanders who witnessed someone else being harassed and reported it, or from delegates like an attorney or family member who reported harassment on behalf of the person being harassed. Twitter changed its policies to allow bystander reports for impersonation and doxxing in February.

Most people who reported harassment had been harassed before. 67% of them said they had notified Twitter about harassment at least once before.


Gamergate made up only a small percentage of reports of online harassment. Though the Gamergate controversy has been one of the most visible stories about online harassment in the mainstream media over the past year or two, only about 12% of the 512 alleged harassing accounts reported to WAM could be linked to it.

Some people reporting harassment were also reported for harassing others. This happened in 27 cases. "In some cases, receivers of harassment may also be engaging in activity that can constitute harassment," the report reasons. "In other cases, receivers of harassment may be subject to 'false flagging' campaigns that attempt to silence the harassment receiver through bad faith reports."

Twitter took action in more than half the cases. Among the 161 reports WAM referred to Twitter, the company took action 55% of the time by deleting, suspending, or warning the reported accounts.


Twitter was unlikely to delete an offending account. Twitter deleted only one account in response to the 161 reports. It was far more likely to suspend the account (which it did 43% of the time) or to deliver a warning (11% of the time).


Twitter did not favor more established accounts. WAM found no relationship between the age of an account or its number of followers and Twitter's actions.

The 811 reports of online harassment that WAM gathered are a fairly small sample size, and a self-selecting one at that. But with a dearth of information coming from Twitter and other technology platforms about how they handle harassment, it's better than nothing. The experiment, at the least, allowed WAM to point out a few places where Twitter could improve its processes, including:


Finding a better way to handle doxxing. The practice of publishing personal information like phone numbers and addresses was the second-most reported type of harassment, second only to hate speech. Twitter took action in only 7 of the 20 reported doxxing cases (35%), compared with a 60% action rate for hate speech. The social network explicitly banned doxxing in March, but the problem may be that many harassers who post personal information on Twitter remove it before the company has time to act. The information has still been released, and the damage is still done, but the evidence is (sort of) gone.

Building better reporting tools. Twitter currently requires people who report abuse on its platform to provide URLs to harassing tweets. If a person is being harassed on Twitter, but not with a tweet (say, by a pornographic profile picture or a username), there's no way to report it. Twitter doesn't accept screenshots as evidence, which means that harassment that is posted and then deleted can't be reported.

Twitter already knows it has work to do in dealing with online harassment. Twitter's CEO, Dick Costolo, summed it up in a memo that leaked in February. "We suck at dealing with abuse and trolls on the platform and we've sucked at it for years," he said, promising, "We're going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them."



At that point, the company had already started making small changes to its online harassment policies. In December, for instance, it had announced a more streamlined way to flag abusive tweets and allowed bystanders to report abuse. And since then, it has announced more policy changes. Later in February, it announced it had tripled the size of its abuse support team (Twitter would not say how big that team is). In March, the company officially banned revenge porn. It also improved its features for reporting threats to law enforcement and gave verified accounts a filter designed to catch abusive tweets.

"We've never said, okay, we're done. Our policies are set. Everything is perfect," Twitter's head of trust and safety, Del Harvey, told me in April. "We've always been saying we need to keep improving and iterating on this stuff."