Facebook and its family of apps have long grappled with the issue of how to better manage, and eradicate, harassment on its platform, turning to both algorithms and humans in their efforts to better address the problem. In the latest development, Instagram announced some new tools of its own.
First, it introduced a new way for people to protect themselves from harassment in their direct messages, specifically in message requests, by filtering out a set of words, phrases and emoji that could indicate abusive content. The filter will also catch common misspellings of those key terms, which are sometimes used to try to evade detection. Second, it is giving users the ability to proactively block people even if they try to contact the user in question through a new account.
The account blocking feature will go live globally in the coming weeks, Instagram said. It confirmed that the feature to filter abusive direct messages will begin rolling out in the U.K., France, Germany, Ireland, Canada, Australia and New Zealand in a few weeks, before becoming available in more countries over the next few months.
Notably, these features are only being rolled out on Instagram, not Messenger or WhatsApp, Facebook’s other two hugely popular apps that enable direct messaging. A spokesperson confirmed that Facebook expects to bring them to Messenger later this year (no word on WhatsApp). Instagram and its sibling apps have typically released updates like these on individual apps first, before considering how to roll them out more broadly.
Instagram said the feature to filter direct messages for abusive content will be based on a list of words and emoji that Facebook compiles with the help of anti-discrimination and anti-harassment organizations, along with additional terms and emoji that users can add themselves. And to be clear, the filter must be proactively enabled, rather than being on by default.
Why? To give users more control, apparently, and to keep conversations private if users so desire. “We want to respect people’s privacy and give people control over their experiences in the way that works best for them,” a spokesperson said, noting that this is similar to how Instagram’s comment filters also work. The control will live under Settings > Privacy > Hidden Words for those who want to enable it.
There are a number of third-party services in the wild now building content moderation tools that detect harassment and hate speech, including the likes of Sentropy and Hive, but what is notable is that the biggest tech companies have so far opted to build these tools themselves.
The system is fully automated, although Facebook noted that it reviews any content that gets reported. While it does not store the data from those interactions, it confirmed that it will use reported words to continue building out its larger database of terms that trigger content blocking, and subsequently the removal, blocking and reporting of the people sending that content.
On the topic of those people, it has taken Facebook a long time to get smarter about the fact that people with bad intentions waste no time creating multiple accounts to fall back on when their main profiles are blocked. That loophole has aggravated users for as long as direct messages have existed, even though Facebook’s harassment policies already prohibit people from repeatedly contacting someone who doesn’t want to hear from them, and even though the company has a stated policy on recidivism, which, as Facebook describes it, means: “if someone’s account is disabled for violating our rules, we would delete any new accounts they create every time we hear about it.”
The company’s approach to direct messages has been modeled, more or less, on how other social networking companies developed theirs: they are open by default, with one inbox reserved for real contacts and a second for anyone to contact you. While some people simply ignore that second box altogether, the nature of how Instagram works, and how it is built, encourages more contact with others, not less. That means people dip into those second inboxes far more often than they might, say, delve into the spam folders of their email.
The larger problem remains a complex game, however, and it’s not just Facebook’s users who are asking for more help in solving it. As the company continues to come under the scrutinizing eye of regulators, harassment, and the better management of it, has emerged as a key area that Facebook will need to address before others step in and address it for the company.