Google launches new Chrome extension to hide toxic comments on the internet

The Chrome extension will block toxic comments online 
Amelia Heathman | 13 March 2019

Social media platforms are regularly grappling with how to deal with toxic and abusive comments online – whether it’s Instagram allowing users to block certain keywords or Twitter inventing an entirely new app to facilitate better conversation.

It’s no easy feat: last year Amnesty International and Element AI revealed that women are abused on Twitter every 30 seconds.

Even if you are not personally receiving abusive messages, you may see hate-filled comments below the line on a news story or on Twitter. All of it can make the internet a nasty place to be.

Google thinks it may have a way to remove these toxic messages. One of its subsidiaries, Jigsaw, has been using machine learning to spot abuse and harassment, and it is now rolling out a new Chrome extension that puts this work to use blocking toxicity online.

Tune is a Chrome extension that Jigsaw says “helps people control the volume of the conversations they see.”

Simply add it to your Chrome browser, sign in with a Gmail account, and then set how much toxicity you want to see in comments on the internet. It works on platforms such as Facebook, Twitter, YouTube, Reddit and Disqus.

Presumably, Tune has skipped Instagram given that people mainly use the platform via its app, and not via a browser.

There’s a “zen mode” which lets you skip comments completely, or you can turn the volume all the way up to see everything. The volume can also be set anywhere in between, letting you choose which kinds of toxicity are filtered out, for instance attacks, insults and profanity.

Tune is part of Conversation AI, a research project into analysing abusive messages that Jigsaw has been running for the past few years. Jigsaw has worked with The New York Times and the Wikimedia Foundation, analysing the comments those platforms receive alongside data on which comments were flagged as inappropriate. All of this information has been used to train its algorithms.
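The article doesn’t describe Tune’s internals, but Jigsaw’s Conversation AI research is publicly exposed through its Perspective API, which returns a toxicity score between 0 and 1 for a piece of text. The sketch below is a rough illustration, not Tune’s actual code, of how an extension could use that kind of score to hide comments above a user-chosen “volume”: the comments:analyze endpoint is the documented Perspective API call, while the API key placeholder, the threshold logic and the helper names are assumptions for illustration only.

```typescript
// Illustrative sketch: score a comment with Jigsaw's public Perspective API
// and hide it when the score exceeds a user-chosen "volume" threshold.
// PERSPECTIVE_API_KEY and the threshold handling are placeholder assumptions.
const PERSPECTIVE_API_KEY = "YOUR_API_KEY";
const ANALYZE_URL =
  `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${PERSPECTIVE_API_KEY}`;

async function toxicityScore(text: string): Promise<number> {
  const response = await fetch(ANALYZE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      languages: ["en"],
      requestedAttributes: { TOXICITY: {} },
    }),
  });
  const data = await response.json();
  // summaryScore.value is a probability-like toxicity score between 0 and 1.
  return data.attributeScores.TOXICITY.summaryScore.value;
}

// A "volume" of 0 behaves like zen mode (anything flagged at all is hidden),
// while 1 shows everything; values in between hide only the worst comments.
async function shouldHide(commentText: string, volume: number): Promise<boolean> {
  const score = await toxicityScore(commentText);
  return score > volume;
}
```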

As Jigsaw explains, the machine learning behind Tune is still experimental, so it will miss some toxic comments and incorrectly hide others that are not toxic. It relies on user feedback to help the team improve the algorithms.

It is also not supposed to be a solution to abusive comments on the internet, particularly if you’re receiving them personally. Instead, Jigsaw hopes experiments like Tune will show people how tech like machine learning “can create new ways to empower people as they read discussions online.”

As a person who spends a lot of time online, I’m interested to see how this works out. Can Tune help make the internet less of a hateful place? It can only try.
