Since the dawn of the Internet, knowing (or perhaps more accurately, not knowing) who is on the other side of the screen has been one of its biggest mysteries and thrills. In the early days of social networking and online forums, anonymous usernames were the norm and meant you could pretend to be whoever you wanted to be.
As exciting as this freedom was, problems quickly became apparent: predators of all kinds used this cloak of anonymity to attack unsuspecting victims, harass anyone they didn’t like or agree with, and spread misinformation without consequence.
For years, the conversation about moderation has centered on two key pillars. First, what rules to write: what content is acceptable or prohibited, how do we define those terms, and who makes the final call in the gray areas? And second, how to enforce them: how can we leverage both humans and artificial intelligence to find and flag inappropriate or even illegal content?
While these remain important elements to any moderation strategy, this approach only singles out bad actors after a breach. There is another equally critical tool in our arsenal that is not getting the attention it deserves: verification.
Most people think of verification as the “blue check mark,” a badge of honor bestowed on the elite and celebrities among us. However, verification is becoming an increasingly important tool in moderation efforts to combat harassment and hate speech.
That blue check mark is more than just a signal showing who is important; it also confirms that a person is who they say they are, which is an incredibly powerful means of holding people accountable for their actions.
One of the biggest challenges facing social media platforms today is the explosion of fake accounts. Bots spread lies and misinformation like wildfire, faster than moderators can ban them.
That’s why Instagram began implementing new verification measures last year to combat this problem. By verifying users’ real identities, Instagram said it would “be able to better understand when accounts are trying to deceive their followers, hold them accountable, and keep our community safe.”
It’s important to remember that verification is not a single tactic, but rather a collection of solutions that must be used dynamically in tandem to be effective. The case for implementing verification also extends beyond stopping the spread of questionable content: it can help companies ensure they stay on the right side of the law.
At Persona, we detect increasingly sophisticated fraud attempts, ranging from the use of celebrity photos and data to create accounts, to bizarre ID photos, and even the use of deepfakes to mimic a live selfie.
That’s why it’s critical for verification systems to consider multiple signals when verifying users, including actively collected customer information (such as a photo ID), passive signals (their IP address or browser fingerprint) and third-party data sources (such as phone numbers and email risk lists). By combining multiple data points, a valid but stolen ID will not get through the gates because signals such as location or behavioral patterns will generate a red flag that this user’s identity is likely fraudulent or, at the very least, warrants further investigation.
This type of holistic verification system will allow social and user-generated content platforms to not only deter and flag bad actors, but also prevent them from repeatedly logging into their platform with new usernames and emails, a common tactic of trolls and account abusers who were banned.
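To make the multi-signal idea concrete, here is a minimal sketch of how such a check might combine signals into a single decision. The signal names, weights, and thresholds are illustrative assumptions, not any vendor's actual API: the point is that no single signal (such as a valid but stolen ID) is enough to pass on its own.

```python
# Hypothetical multi-signal verification check. Signal names, weights,
# and thresholds below are illustrative assumptions only.

def risk_score(signals: dict) -> float:
    """Combine independent verification signals into a single risk score."""
    weights = {
        "id_document_mismatch": 0.4,      # active signal: photo ID fails checks
        "ip_geo_mismatch": 0.25,          # passive signal: IP far from stated location
        "known_device_fingerprint": 0.2,  # passive signal: device tied to a banned account
        "email_on_risk_list": 0.15,       # third-party data source
    }
    return sum(w for name, w in weights.items() if signals.get(name))

def decision(signals: dict, review_at: float = 0.3, block_at: float = 0.6) -> str:
    """Map the combined score to an action: approve, manual_review, or block."""
    score = risk_score(signals)
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "manual_review"
    return "approve"
```

In this sketch, a stolen-but-valid ID alone would pass the document check, but a mismatched IP location plus a device fingerprint tied to a banned account pushes the combined score past the review threshold, which matches the article's point that secondary signals catch what any single check misses.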
Beyond individual account abusers, a multi-signaling approach can help manage a possibly bigger problem for social media platforms: coordinated disinformation campaigns. Any problem involving groups of bad actors is like fighting a multi-headed snake: you cut off one head only to have two more grow back in its place.
However, slaying the beast is possible when you have a comprehensive verification system that can surface clusters of bad actors based on shared properties. While these groups will continue to seek new ways in, multifaceted verification that is tailored to the end user can help keep them from running rampant.
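The clustering idea can be sketched simply: if two accounts share any strong property (the same device fingerprint, the same IP, the same payment instrument), treat them as linked, and let the links chain together into clusters. The field names here are hypothetical; real systems would weight signals and tolerate benign sharing (e.g. household IPs), but a union-find over shared properties captures the core technique.

```python
# Illustrative clustering of accounts by shared verification signals.
# The account fields ("device", "ip", etc.) are assumed for this sketch.
from collections import defaultdict

def cluster_accounts(accounts: list[dict]) -> list[set]:
    """Group accounts transitively linked by any shared signal value."""
    parent = {a["user"]: a["user"] for a in accounts}

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    seen = {}  # (signal_name, value) -> first user observed with it
    for acct in accounts:
        for key, value in acct.items():
            if key == "user":
                continue
            if (key, value) in seen:
                union(acct["user"], seen[(key, value)])
            else:
                seen[(key, value)] = acct["user"]

    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a["user"])].add(a["user"])
    return list(clusters.values())
```

For example, if accounts "a" and "b" share a device fingerprint and "b" and "c" share an IP, all three land in one cluster even though "a" and "c" share nothing directly; banning the cluster, rather than one head of the hydra, is what keeps two more from growing back.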
Historically, identity verification systems like Jumio or Trulioo were designed for specific industries, such as financial services. But we are starting to see an increase in demand for industry-independent solutions like Persona to keep up with these new and emerging use cases for verification. Almost every industry that operates online can benefit from verification, even those such as social media, where there is not necessarily a financial transaction to protect.
It is not a question of whether verification will become part of the solution for challenges such as moderation, but rather when. The technology and tools exist today, and it’s up to social media platforms to decide that it’s time to make this a priority.