
From Spam to Profanity: How to Manage Abusive Content on Your Online Community

Posted by Taco Potze
May 26, 2021

Community managers know all too well how draining it is to manage and monitor user-generated content. You can spend hours browsing through comments, images, events and groups to make sure that everything is above board. And when your community gets infiltrated by automated bots posting spam, then the problem really gets out of control!

Luckily, a healthy mix between automated moderation and manual community reporting can help keep you – and your team – sane.

Making use of automated moderation and threat protection tools

Nowadays, content reporting and community moderation are no longer conducted just by people scrolling through streams of content or double-checking that users are actually real people. Some larger companies receive more than 60,000 content reports a day. I wouldn’t want to be the manager who has to scroll through that! More and more, automated systems and artificial intelligence algorithms are used to help out.

Use spam protection software for your community

Automation can help community owners and managers put up a number of barriers to protect a community. Perhaps the most basic examples are the well-known CAPTCHA or reCAPTCHA forms. I’m sure you’ve had to prove countless times that you are not a robot by clicking on a series of image tiles that contain a bicycle or traffic light. Barriers like these stop the most obvious spam bots from signing up and accessing your community.
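
To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of what happens behind such a barrier: the sign-up form sends a reCAPTCHA token to your server, which asks Google’s verification endpoint to confirm it came from a human. The helper name and key variables are placeholders, not part of Open Social.

```python
# Minimal sketch of server-side reCAPTCHA verification during sign-up.
# The secret key and token are placeholders; the siteverify endpoint and its
# "secret"/"response" parameters are Google's documented reCAPTCHA API.
import requests

RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def is_human(recaptcha_token: str, secret_key: str) -> bool:
    """Return True if Google confirms the form was filled in by a human."""
    resp = requests.post(
        RECAPTCHA_VERIFY_URL,
        data={"secret": secret_key, "response": recaptcha_token},
        timeout=5,
    )
    return bool(resp.json().get("success"))

# Example: reject the registration before an account is ever created.
# if not is_human(form_token, RECAPTCHA_SECRET):
#     show_error("CAPTCHA verification failed")
```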

But there are more advanced tools to use as well. Open Social’s Spam Protection extension uses CleanTalk’s software to block users from suspicious IP addresses or blocked locations. CleanTalk evaluates user and sign-up behavior against a number of metrics that indicate whether an account is trustworthy or not. This is your first line of defense: if bad actors and bots can’t access your community, they can’t post spam or abusive content.
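
As a rough illustration of what such a first line of defense does, the sketch below refuses sign-ups from blocked IP ranges, blocked locations and throwaway email domains. This is not CleanTalk’s actual API; all names and lists are hypothetical.

```python
# Hypothetical sign-up screening, loosely mirroring what a service like
# CleanTalk does on your behalf. Blocklists and names are illustrative only.
import ipaddress

BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # example range
BLOCKED_COUNTRIES = {"XX"}                                   # placeholder codes
DISPOSABLE_DOMAINS = {"example-disposable.test"}             # throwaway mail

def signup_allowed(ip: str, country_code: str, email: str) -> bool:
    """First line of defense: refuse registrations from suspicious origins."""
    addr = ipaddress.ip_address(ip)
    if any(addr in network for network in BLOCKED_NETWORKS):
        return False                      # known spam source
    if country_code in BLOCKED_COUNTRIES:
        return False                      # blocked location
    if email.split("@")[-1].lower() in DISPOSABLE_DOMAINS:
        return False                      # disposable email address
    return True
```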

Add profanity filters and input checks

The other tool in Open Social’s Spam Protection extension is WebPurify, a modern solution for monitoring abusive user-generated content. Powered by AI technology, it lets you block profanities with smart filters and keep offensive images off your platform with live image moderation.

Using WebPurify, you can even add a custom list of banned words and automatically block offensive content from your community. This kind of content moderation is known as an input check.
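
To illustrate the idea of an input check (this is a hypothetical sketch, not WebPurify’s API), the snippet below compares new text against a custom banned-word list before it is allowed onto the platform.

```python
# Hypothetical input check: match submitted text against a custom banned-word
# list before the content is saved. The word list is an example placeholder.
import re

BANNED_WORDS = {"badword1", "badword2"}  # your community-specific list

def passes_input_check(text: str) -> bool:
    """Return False if the text contains any banned word (case-insensitive)."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return not any(token in BANNED_WORDS for token in tokens)

# if not passes_input_check(new_comment_body):
#     reject("Your comment contains language that is not allowed here.")
```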

However, even the most sophisticated automation tools are not perfect!

Consider, for example, an automated filter that flags a historically important photograph as upsetting. I can understand how the algorithm perceives the image that way, but it doesn’t take into account the historical importance of the photo, or that it might be relevant (and acceptable) in the context of a news article about human rights. Cases like this show the nuance that content moderation requires: you still need human moderators to make complex decisions in the grey areas.

Your community members are your most reliable and relevant moderators

The social technology industry has long recognized the potential of using members to moderate content on social platforms. Instead of relying exclusively on automation, you can enable your members to report content themselves using a content reporting feature.

What makes this so powerful is that you get actual human oversight while also empowering your members to take ownership of their community. Giving them input in content moderation helps them feel like they have a say in what goes and what doesn’t.

Create useful guidelines

Your members will need clear and easy-to-find community guidelines and rules to adequately handle and judge abusive content in your community.

This guide will help members recognize abusive content by clearly defining it, and it will highlight the importance of their role in keeping the community safe.

Here’s an introduction we prepared for you to use in your guidelines:

As community managers, we cannot be present for every interaction and upload. We rely on our members to report abusive and potentially harmful content. We may make exceptions to these policies based on artistic, educational, or documentary considerations. But there are a few clear lines we have a zero-tolerance policy toward: …

Naturally, there are some standard definitions of what counts as abusive content:

  • Threatening or harassing other users;
  • Sexualized content of another person shared without consent;
  • Content that exploits or abuses children;
  • Content that supports or promotes violent behavior;
  • Discrimination against others;
  • Fraud and impersonation;
  • Invasion of privacy;
  • Spamming.

However, you may want to add further definitions that specifically apply to your community.

For example, plagiarizing other members’ work may be a specific concern in education or ideation communities. You can find further instructions from us on how to set up community guidelines.

An effective reporting process is crucial

With Open Social, members can report a number of content types: posts, topics, comments, events, images, etc. Making the reporting function clearly visible and easily accessible is key to empowering your members. If the reporting process is unclear or difficult to access, then there is no point in having it at all!

The content reporting feature in Open Social allows your members to report inappropriate, abusive or spam content with only two clicks.

Content moderation - reporting

As a community member, you can mark any content you find abusive by taking the steps below (a sketch of what the resulting report might contain follows the list):

  • Selecting the content that you wish to report;
  • Selecting the report option from the drop-down menu in the corner;
  • Choosing a reason why the content is a violation and, if requested, adding an explanation.
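
For illustration, here is a hypothetical sketch of what such a report might look like as data once those steps are completed. The field names are examples, not Open Social’s internal schema.

```python
# Hypothetical shape of a content report created by the two-click flow above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContentReport:
    content_id: str        # the post, topic, comment, event or image reported
    reporter_id: str       # the member who filed the report
    reason: str            # e.g. "spam", "harassment", "abusive"
    explanation: str = ""  # optional free-text detail, if requested
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

report = ContentReport(content_id="post-123", reporter_id="member-42",
                       reason="spam", explanation="Link farm in the comments.")
```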

Triggering automatic action

With Open Social, you can set reported content to be unpublished automatically if you want to err on the side of caution. You can always reinstate content that turns out not to violate your community guidelines, but it is better to remove questionable content quickly.
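
As a rough sketch of that behavior, a report handler might unpublish the content straight away and put the report in a queue for a moderator. The `content_store` and `review_queue` objects are placeholders, not Open Social’s actual API.

```python
# Hypothetical "err on the side of caution" handling: unpublish on report,
# then let a human moderator make the final call.
def handle_report(report, content_store, review_queue, auto_unpublish=True):
    if auto_unpublish:
        content_store.unpublish(report.content_id)  # hidden, not deleted
    review_queue.add(report)                        # a moderator decides later

# A moderator can later reinstate content that turns out to be fine:
# content_store.publish(report.content_id)
```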

Content moderation - settings

Content moderation - Report dashboard

Defining strict repercussions

While you might know what to do with the offensive content itself, how do you handle the community members who posted offensive content? Of course, it is easy to implement a strict course of action if the member is a bad actor or bot that got through the automated spam protection. But what about a trusted community member who has offended (or has been a repeat offender)?

Use your community guidelines to define the consequences for first-time and repeat offenders. These repercussions need to be strict so that abusive behavior is (hopefully) avoided upfront.

Typically, first-time offenders are let off with a warning. You may even decide to ban them from posting content for a period of time. On the other hand, repeat offenders are usually expelled from the community. Don’t be afraid to remove these members immediately. In the end, you want to create a place that’s safe for your members. This should be your first priority.

The consequences may also differ according to the type of content that was posted. That’s why it helps to divide content into two categories (a simple mapping is sketched after the list):

  • Zero-tolerance content: content that results in immediate expulsion.
  • Abusive content: first-time offenders are let off with a warning.
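
As a hypothetical illustration, a simple policy table could map these categories to consequences and escalate for repeat offenders. The categories and actions below are examples to adapt to your own guidelines.

```python
# Hypothetical mapping from content category to consequence, escalating with
# the member's number of prior offenses.
CONSEQUENCES = {
    "zero_tolerance": ["expel"],                           # immediate expulsion
    "abusive":        ["warn", "temporary_ban", "expel"],  # escalate per offense
}

def consequence_for(category: str, prior_offenses: int) -> str:
    """Pick the action for this member, escalating with their offense count."""
    steps = CONSEQUENCES[category]
    return steps[min(prior_offenses, len(steps) - 1)]

# consequence_for("abusive", 0)        -> "warn"
# consequence_for("abusive", 2)        -> "expel"
# consequence_for("zero_tolerance", 0) -> "expel"
```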

As a community manager, it will be your job to decide how to handle incoming abusive content and to make the final decision on the repercussions.

Creating a multi-layered barrier against spam and abusive content

The best defense is always a multi-layered approach. This means you use a number of tools to keep your community safe and secure: if a bad actor makes it through one layer, another layer is still there to protect you.
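
As a hypothetical sketch of that layering, the snippet below runs a new post through each automated check in turn, with member reporting and human review as the final layer. The layer functions are the earlier sketches (or whatever checks you actually use); nothing here is Open Social’s implementation.

```python
# Hypothetical multi-layered moderation: every automated layer gets a chance
# to stop bad content before it reaches members; reporting comes after.
def moderate(post_text: str, layers) -> str:
    """Run a new post through each (name, check) layer in order."""
    for name, check in layers:
        if not check(post_text):
            return f"rejected by {name}"
    return "published (members can still report it for manual review)"

# Example wiring, reusing the earlier sketches:
# result = moderate(new_post_body, [
#     ("input check", passes_input_check),
#     ("image moderation", images_are_clean),  # hypothetical check
# ])
```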

Spam Protection - Multiple layers of protection

Automation keeps your community as safe and clean as possible, and if all else fails, your community members can easily report content for manual moderation and possible removal. By enabling the content reporting feature, you not only ensure a much healthier community for your members, but you also increase the engagement of your current members.

If you want to find out how to add Spam protection to your Open Social community, you can find more information here: Spam Protection Extension

You can also download our 10 Steps For Your First Year of Community Building guide to learn more about community guidelines, content moderation and other essential elements to setting up and running a successful online community. 

To find out more about what Open Social can do for your organization, you can easily schedule a one-on-one demo and we can show you all the features (and also what Open Social would look like branded for your organization!)

