After the events at the U.S. Capitol, Facebook announced measures to limit the circulation of certain messages, yet users continued to receive ads for camouflage clothing and military gear while browsing extremist content. In practice, the company could limit the organic distribution of such content, but not the ads served alongside it.
At a time of intense debate about the role of social media, advertisers face the challenge of protecting their brands online. Programmatic advertising has exposed new risks, as the case of YouTube illustrates: several brands discovered that their ads were appearing on extremist channels, forcing Google and the video platform to build better control tools.
To get ahead of similar problems, Facebook has launched an update that gives advertisers more control over where their ads appear. A new Brand Safety tool lets them block placements alongside content associated with topics considered harmful to their reputation. Blocking works by category, such as “crime and tragedy,” “news and politics,” or “social issues.” The feature is part of Facebook's effort to comply with the standards of the Global Alliance for Responsible Media (GARM).
For now, the tools do not appear to be generally available to advertisers; they still need time for testing and trials. The main problem is that the categories are very broad: a blanket label such as “news and politics” sweeps in many legitimate topics. News publishers face this challenge every time advertisers block a keyword as problematic, cutting off ad revenue from safe coverage. To work well, Facebook's new tool will need to distinguish which contexts are genuinely safe and legitimate opportunities to serve advertising.