Immediately following the launch of a new Q&A format for creators answering viewer questions, TikTok announced that it is rolling out new commenting features. Creators will now be able to approve which comments appear on their content before those comments are published. Another new addition, aimed at users who are commenting, will display a pop-up box asking the user to reconsider posting a comment that may be inappropriate or objectionable.
TikTok says the goal of the new features is to maintain a positive and supportive environment where people can focus on being creative and finding community. Instead of reactively removing offensive comments, creators who choose to use the new “Filter All Comments” feature will be able to choose which comments will appear on their videos. When enabled, they will need to review each comment individually for approval using a new management tool.
This feature builds on TikTok’s existing comment controls, which allow creators to filter out spam and other offensive comments or filter by keywords, similar to other social apps like Instagram.
But filtering all comments means that comments won't be posted at all unless the creator approves them. This gives creators full control over their presence on the platform and could prevent harassment and abuse. It could also allow creators to spread false information without any visible pushback, or make them appear more popular than they really are. That could be problematic, especially since brands deciding which creators to work with to promote their products could get a distorted impression of a user's reception.
The other feature, meanwhile, will push users to reconsider posting negative comments, i.e., those that appear to be bullying or inappropriate. It will also remind users of the TikTok Community Guidelines and allow them to edit their comments before sharing them.
These types of "nudges" slow people down and give them time to pause and think about what they are saying, rather than reacting in the moment. TikTok already uses nudges to ask users whether they want to share unsubstantiated claims that fact-checkers cannot verify, in an attempt to curb the spread of misinformation.
It took years for other social networks to add messages that ask users to stop and think before posting. Instagram, for example, launched in 2010, but it took nearly a decade before it decided to test a feature that prompted users to reconsider before posting offensive comments. Meanwhile, Twitter said last month that it was running another test asking users to reconsider harmful replies, and it had been running variations of the same test for nearly a year.
Social networks have been hesitant to incorporate more messages like this into their platforms, even though such prompts have demonstrated a strong ability to influence users' actions. When Twitter started asking users to read articles linked in a tweet before retweeting, for example, users opened those articles 40% more often. Most of the time, though, networks opt to downgrade or hide negative comments instead, as Instagram does with "View hidden comments" or Twitter does with "Hide replies."
TikTok says it is consulting with industry partners to develop its new policies and features and also announced a partnership with the Cyberbullying Research Center (CRC), which develops research on cyberbullying and online abuse and misuse. The company says it will collaborate with CRC to develop other initiatives in the future to help promote a positive environment.