

How Bots and Humans Might Work to Stop Harassment

Craig Newmark

This article was originally published on May 24, 2017.

There are some really bad people who harass journalists. Women and minorities, especially, are the most frequent targets. Yet many newsrooms do little or nothing to protect their employees, or to think through how journalists and organizations should respond when harassment occurs.

Harassers and trolls have multiple motivations, often simple racism or misogyny, sometimes the spread of misinformation, sometimes the suppression of law enforcement or intelligence operations. Frequently, what appear to be multiple harassers are actually sock puppets, Twitter bots, or multiple accounts operated by a single individual.

Sustained harassment can do serious psychological damage, and I speak from personal experience. Outright intimidation is a related problem, suppressing the delivery of trustworthy news, the kind of reporting that is vital to democratic governance.

The usual solution is to ignore trolls and harassers, but they can be persistent, and they often game the system successfully. You can mute or block a harasser on Twitter or Facebook, but it's easy enough for them to create a new account in most systems.

If you're knowledgeable in Internet forensics, you can sometimes trace a harasser's account and "dox" them, that is, post personally identifiable information as a deterrent. However, that really needs to be done in a manner consistent with site terms and conditions, perhaps working with the site's trust and safety team. (Seriously, this is a major ethical and legal issue.)

Or, if you have a thick skin, you can respond with "shock and awe," that is, with a brutal response in turn. Or you can try to reason with them, which has occasionally worked. Retaliation against professionals, however, often backfires. They're usually well funded, without conscience, and often very smart.

One way to address rampant harassment would be for news organizations to work with their security departments to evaluate the worst abuse and do risk assessments. Sometimes threats are only threats, but sometimes they're serious. News organizations might also share information regarding harassers, while respecting the rights of the accused and the terms and conditions of the organizations involved. Here, too, there are serious legal and ethical considerations.

Perhaps news orgs could enlist subscribers or other friends to bring harassment to light. Participants in such a system could simply tweet the harasser an empty message, or one with a designated hashtag, withdrawing approval while avoiding drawing attention to the actual harassment. The empty message might communicate a lot in zero words.

I believe that the targets of harassment need help from the platforms, and here's the start of a way that could happen. I'm attempting to balance fairness with preventing harassers from gaming the system, so please consider this only a start.

Let's use Twitter for this thought experiment, mostly because I understand it, and they're genuinely trying to figure this out.

Suppose you're a reporter who is a verified user, and you get a harassing tweet. You'd quote-retweet it to a specific account as a way of reporting the harassment. That account would be a bot that begins to analyze the harassing tweet and enters the email and IP addresses behind it into a database.

Periodically, a process would run to check whether there's a pattern of harassment from that IP or email address; if so, the account could be suspended and its owner contacted.
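To make that pipeline a little more concrete, here is a minimal sketch of what the bot's back end might look like. Everything in it is hypothetical: the table layout, the thresholds, and especially the assumption that the platform running the bot can see the email and IP address behind a reported account (no public API exposes those).

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical back end for the report bot. Assumes the platform itself
# operates the bot and can see the email and IP address behind a
# reported account; no public API provides that.

db = sqlite3.connect("harassment_reports.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS reports (
        reported_account TEXT,
        email            TEXT,
        ip_address       TEXT,
        reporter         TEXT,
        reported_at      TEXT
    )
""")

def record_report(reported_account, email, ip_address, reporter):
    """Called when a verified user quote-retweets a harassing tweet to the bot."""
    db.execute(
        "INSERT INTO reports VALUES (?, ?, ?, ?, ?)",
        (reported_account, email, ip_address, reporter,
         datetime.utcnow().isoformat()),
    )
    db.commit()

def find_repeat_offenders(window_days=30, min_reports=3, min_reporters=2):
    """Periodic job: flag email/IP pairs reported repeatedly by distinct people.

    Requiring several distinct reporters is one crude guard against a
    single account gaming the system with bulk reports.
    """
    since = (datetime.utcnow() - timedelta(days=window_days)).isoformat()
    return db.execute(
        """
        SELECT email, ip_address,
               COUNT(*) AS n_reports,
               COUNT(DISTINCT reporter) AS n_reporters
        FROM reports
        WHERE reported_at >= ?
        GROUP BY email, ip_address
        HAVING n_reports >= ? AND n_reporters >= ?
        """,
        (since, min_reports, min_reporters),
    ).fetchall()  # candidates for suspension and follow-up contact
```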

While most journalists would find it easy to do such a retweet, perhaps this should be more open to all, which could involve a harassment-report button or menu option on a particular tweet. (There's a button and other means within the Twitter UI to do some of this, and Twitter has signaled that more is on the way.)

News orgs also need to step up to protect their own reporters.

They could enlist subscribers or other friends to bring harassment to light. Participants in such a system could send an automated tweet to the harasser that says, "This account has been reported for harassment and is being monitored by the community." This type of system publicly tells harassers "you are on notice" and that the community is watching. Note that this might be easily gamed, unless the reports come from verified journalists or similar.
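As a rough illustration of that notice, and of gating it on a vetted reporter list to limit gaming, here is a sketch using the Tweepy library. The credentials and the VERIFIED_JOURNALIST_IDS list are placeholders, and the gating rule is my assumption, not an established anti-abuse design.

```python
import tweepy

# Sketch of the automated community notice. Credentials and the vetted
# reporter list are placeholders, not real values or a real vetting process.

client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

VERIFIED_JOURNALIST_IDS = {"12345", "67890"}  # hypothetical vetted list

NOTICE = ("This account has been reported for harassment "
          "and is being monitored by the community.")

def send_notice(reporter_id: str, harassing_tweet_id: str):
    """Reply to the harassing tweet with the notice, for vetted reporters only."""
    if reporter_id not in VERIFIED_JOURNALIST_IDS:
        return  # ignore unvetted reports to limit gaming
    client.create_tweet(text=NOTICE, in_reply_to_tweet_id=harassing_tweet_id)
```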

Since this is a significant job, social networks may want to test organizing a volunteer community, like the one Wikipedia has, to help monitor the reports and accounts. Social networks could take it a step further and have trained members of the community respond to some of the harassers (not the bots) to discuss why their tweets were reported for harassment. Teaching moments are important in addressing harassment. If the account holder continues the harassment, they get permanently banned from the social network. Some online games have adopted a similar strategy and have had some success with this approach.
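One way to picture that escalation ladder, report, then human outreach, then a permanent ban on repeat offense, is as a small state machine. The states and transition rule below are my sketch of the idea, not any platform's actual policy.

```python
from enum import Enum, auto

# Illustrative escalation ladder: community report -> volunteer outreach
# ("teaching moment") -> permanent ban if harassment continues.

class Status(Enum):
    CLEAR = auto()
    REPORTED = auto()   # flagged by the community, pending review
    CONTACTED = auto()  # a trained volunteer has explained the report
    BANNED = auto()     # harassment continued after outreach

def next_status(current: Status, harassed_again: bool) -> Status:
    if current is Status.CLEAR:
        return Status.REPORTED if harassed_again else Status.CLEAR
    if current is Status.REPORTED:
        return Status.CONTACTED  # always escalate to a human conversation
    if current is Status.CONTACTED:
        return Status.BANNED if harassed_again else Status.CLEAR
    return Status.BANNED  # bans are permanent
```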

I realize these ideas are fairly half-baked; the devil's in the details. I'm also omitting a lot of detail, since deeply detailed information could help harassers game this or other systems. In any case, we need to start somewhere. Harassment and intimidation of reporters is a real problem, with real consequences for democracy.

Craig Newmark is an Internet entrepreneur best known as the founder of Craigslist.

This article is part of The Democracy Project, a collaboration with The Atlantic.

Jun 13, 2017