By B.L. Ochman @whatsnext

Can new ideas combat online trolls?

Can publications keep trolls from making nasty comments? Is there a way to prevent rants by people who haven’t even read the article?

Some great ideas are being tried. At NRKbeta, the tech vertical of the Norwegian public broadcaster NRK, potential commenters have to pass a three-question multiple-choice quiz about the article before they can comment. The goal? To make sure they have actually read the post before sounding off.
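
A gate like that is simple to build. NRKbeta has not published its code here, so the sketch below is purely illustrative — the QuizQuestion class, the field names, and the check logic are my assumptions, not NRKbeta's implementation:

```python
# Illustrative sketch of a pre-comment quiz gate (not NRKbeta's actual code).
from dataclasses import dataclass

@dataclass
class QuizQuestion:
    prompt: str
    choices: list[str]
    answer_index: int  # index of the correct choice

def passes_quiz(questions: list[QuizQuestion], answers: list[int]) -> bool:
    """Unlock the comment form only if every question about the article is answered correctly."""
    return len(answers) == len(questions) and all(
        a == q.answer_index for q, a in zip(questions, answers)
    )

quiz = [
    QuizQuestion("What does NRKbeta require before commenting?",
                 ["A subscription", "A three-question quiz", "A photo ID"], 1),
]

if passes_quiz(quiz, [1]):
    print("Comment form unlocked.")
else:
    print("Please read the article and try again.")
```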

I have often been surprised, and frustrated, when comments on my articles (a) are longer than the article itself; (b) take me to task for not mentioning something I did, in fact, write about; (c) respond to earlier comments that have nothing to do with my story; or (d) are just plain nasty.

“We thought we should do our part to try and make sure that people are on the same page before they comment. If everyone can agree that this is what the article says, then they have a much better basis for commenting on it,” NRKbeta journalist Ståle Grut told Joseph Lichterman at Nieman Lab.

“We’re trying to establish a common ground for the debate,” NRKbeta editor Marius Arnesen said. “If you’re going to debate something, it’s important to know what’s in the article and what’s not in the article. [Otherwise], people just rant.”

NRKbeta is not alone in building tools to curb trolls’ comments. Last week, Google and Jigsaw launched Perspective, an early-stage technology that uses machine learning to help identify toxic comments.

How Perspective Works

Writing on Medium, Jared Cohen, President of Jigsaw, explained: “Perspective reviews comments and scores them based on how similar they are to comments people said were ‘toxic’ or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.”

It should keep improving as it is exposed to more comments on more sites. It is currently being tested by The New York Times and The Economist.
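
Perspective is offered to publishers as a REST API. Based on Jigsaw’s public documentation at launch, a call looks roughly like the sketch below; treat the endpoint, the TOXICITY attribute name, and the placeholder API key as assumptions rather than gospel:

```python
# Sketch of a Perspective API request; endpoint and field names follow
# Jigsaw's public documentation, but treat them as assumptions.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; publishers get their own key from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You are an idiot and should not be allowed to post."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    result = json.load(response)

# summaryScore.value is a probability-like toxicity score between 0 and 1,
# reflecting how similar the comment is to ones reviewers labeled "toxic."
score = result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")
```

A publisher could use that score to flag comments above a threshold for human review rather than rejecting them outright, which is how the early testers are reported to be approaching it.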

Keeping the Trolls at Bay

There is also CivilComments, a software platform from a startup that crowdsources comment moderation: users have to rate other people’s comments before they can post one of their own. It is being offered to news organizations.
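
CivilComments has not published its algorithm, but the gate it describes — review a few strangers’ comments before your own is accepted — might be sketched like this; the class, method names, and the three-review threshold are all hypothetical:

```python
# Hypothetical sketch of a "moderate before you post" gate in the spirit
# of CivilComments; names and the threshold are illustrative assumptions.
REQUIRED_REVIEWS = 3  # assumed number of peer reviews required before posting

class CommentGate:
    def __init__(self) -> None:
        self.reviews_completed: dict[str, int] = {}

    def record_review(self, user_id: str) -> None:
        """Credit a user for rating another commenter's submission as civil or not."""
        self.reviews_completed[user_id] = self.reviews_completed.get(user_id, 0) + 1

    def may_post(self, user_id: str) -> bool:
        """A user may post only after reviewing enough of other people's comments."""
        return self.reviews_completed.get(user_id, 0) >= REQUIRED_REVIEWS

gate = CommentGate()
for _ in range(3):
    gate.record_review("user-42")
print(gate.may_post("user-42"))  # True: three reviews done, posting unlocked
```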

The New York Times, The Washington Post, the Mozilla Foundation, and the Knight Foundation recently launched the Coral Project, an initiative to create open-source tools that help news organizations improve their on-site communities.

Can these platforms keep the trolls at bay? Can they make online conversation more civil? Can they lighten the load for the human editors who now sift through thousands of comments on sites like The New York Times? Time will tell.