Moderate Content for Brand Compliance

Tl;dr: Content moderation is well established for social media, but brands and organizations also need it to ensure brand compliance. Generic content moderation services may not meet your requirements: a swimwear producer, for example, will receive many suggestive-content flags from those providers. Picturepark lets you define smart lists of flagged terms and match them against suggestions from AI tagging. Flagged content is highlighted, and access to it can be restricted to owners.

As online content becomes easier and easier to produce, more of it is published, and at ever greater speed. With this explosion of internet content creation, moderation has become a major challenge.

Content that is unsafe, negative, or offensive is just as easy to create, and in great quantities. The 2020 infographic from Domo, which illustrates “what happens in 1 minute on the internet”, demonstrates the sheer scale of modern content creation: in the time you have been reading this, 150,000 photos have already been uploaded to Facebook.

For an organization, the problem with content that can be deemed negative or offensive is not necessarily the impact it has on you; you must always be vigilant about how it may be perceived by others. You don’t want to annoy or upset your customers, which may result in a backlash on social media and severe damage to your brand’s reputation.

Adopting clear content creation guidelines reduces the risk of offensive content being used at your organization, but you may still end up with content that is not compliant with your brand. Brand compliance is essential: your brand must be authentic yet consistent, reflected in your visual, social, and product content. That means everything you publish.

Why Do We Need Content Moderation?

All content should be checked against your brand compliance guidelines as well as for offensiveness and negativity. That is an enormous amount of work if you deliver content to multiple channels on a daily or weekly basis. Do you have the time and resources to manually check all of your content in detail, every time?

A key example is the medium that connects your company to your customers and where all kinds of content can be posted in mere seconds: social media. Social media managers may simply reuse approved content without checking every detail of an image, let alone every detail of a video, because doing so is just too time-consuming.

How to Effectively Moderate Content?

There are several moderation services out there, some accessible purely via API, such as Amazon Rekognition’s moderation API or Azure Content Moderator. These tools return simple true-or-false results: content is or is not offensive, based on specified parameters. But what counts as negative content is subjective. You can adapt those parameters yourself, yet what your customers find offensive may be completely different.
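To make this concrete, here is a rough sketch of what a call to one of these services can look like, using Amazon Rekognition’s moderation API via the boto3 SDK. The bucket, file name, and confidence threshold are placeholder values; the labels mentioned in the comments are examples of the generic categories such services return.

```python
# Sketch only: asking a third-party moderation service (Amazon Rekognition)
# to classify an image. Bucket, key and threshold are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-central-1")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-brand-assets", "Name": "campaign/swimwear.jpg"}},
    MinConfidence=60,  # the "parameter" you can adapt: how sensitive the service is
)

for label in response["ModerationLabels"]:
    # e.g. a swimwear shot typically comes back under the generic
    # "Suggestive" category, regardless of whether that matters to your brand
    print(label["ParentName"], label["Name"], round(label["Confidence"], 1))
```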

There are common categorizations such as explicit nudity, suggestive content (for example, people wearing swimwear), violence, or visually disturbing content. Our customer Arena produces swimwear, which makes them a good example of an organization that may get misleading results: lots of warnings from automatic moderation services. Another customer works with imagery related to surgery; they too may receive an overwhelming number of over-cautious warnings, triggered because the images are interpreted as showing bloodshed or violence.

Picturepark offers a clearer way to moderate content: instead of generic “this content is inappropriate” warnings, it works with a smart list of topics and terms that are considered inappropriate for your organization and that is managed by you.

How to Moderate Content in Picturepark?

With Picturepark, you control which terms in smart lists count as inappropriate, offensive, or suggestive. An AI-powered auto-tagging engine provides a list of suggested terms for your content; Picturepark then intelligently compares them with your own list of flagged terms and gives you an overview of every match. Picturepark automation can additionally remove access for specific users, based on terms you have marked as inappropriate.

This way, AI and automation augment humans in the content creation process, delivering the best results, especially for the nuanced cases in brand compliance.

Automated Content Moderation in Picturepark

Test Content Moderation in Picturepark now.

Get in touch

Let's have a look at the required configuration in Picturepark:

  • Create and manage a smart list of sensitive terms for your content.
  • Configure AI tagging to automatically scan and flag sensitive content.
  • Decide who can and cannot see content that is flagged as sensitive.

Step 1: Smart List of Sensitive Terms

Picturepark comes with a powerful controlled vocabulary developed by Picturepark, Picturepark customers, and taxonomy experts. This vocabulary contains more than 30,000 terms, classified into concepts and enriched with broader definitions. It is instantly accessible: all you need to do is open the Picturepark smart list entitled “Keywords”.

Based on those keywords, you can build your own smart list of sensitive terms that are inappropriate, offensive, or may even harm your brand’s reputation. All you need to do is add the terms and categories that make sense for your brand. They can always be updated later, which is important because your brand, your customers, and what society finds acceptable will undoubtedly change over time.

This method of content moderation makes sense because it recognizes that different organizations have different ideas about what kind of content is sensitive and should therefore be flagged as potentially inappropriate. For example, a green energy producer may add a category for pollution to avoid any images that contain litter, garbage, or smog.

However, these concerns may be different for other types of organizations. Privacy concerns, especially those related to GDPR, may matter most for organizations that work with visuals containing people, who must ensure that all permissions to use such content are in place. To take a more recent example from the Coronavirus pandemic, an organization may want all group images to show only people wearing masks and practicing sufficient social distancing.

In Picturepark, it is also possible to add instructions and reasoning per term to make clear to users why content containing such terms may not be appropriate. This way you can train and educate your users, strengthening brand awareness and brand compliance.
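Purely as an illustration of the kind of information such a smart list can hold (the structure and field names below are made up for this sketch and do not reflect Picturepark’s actual data model), a flagged-terms list with per-category instructions might look like this:

```python
# Illustrative only: sensitive-term categories, the terms to watch for,
# and the instruction shown to users. In Picturepark this lives in a
# smart list, not in code.
FLAGGED_TERMS = {
    "pollution": {
        "terms": ["litter", "garbage", "smog"],
        "instruction": "We never publish imagery showing pollution; "
                       "choose clean-environment visuals instead.",
    },
    "privacy": {
        "terms": ["crowd", "face", "audience"],
        "instruction": "Check that GDPR consent for every identifiable "
                       "person is documented before publishing.",
    },
    "social distancing": {
        "terms": ["group", "crowd", "handshake"],
        "instruction": "Group shots must show masks and sufficient distance.",
    },
}
```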

Step 2: Configure AI Tagging

Configuring AI tagging requires a Picturepark Business Rule, which automatically carries out specific actions when a given condition is met. A Business Rule can easily be configured to automatically scan and tag all newly created content using the Clarifai Picturepark Connector, enabling automated content tagging.

Matching the tags provided by Clarifai against your controlled vocabulary gives you real content intelligence based specifically on your business and brand, instead of low-quality generic keywords.
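The Clarifai Picturepark Connector handles this exchange for you. Purely to show the kind of data that comes back, a direct call to Clarifai’s public API could look roughly like the sketch below; the API key, image URL, model ID, and confidence threshold are placeholders and may differ in your setup.

```python
# Rough sketch of the kind of request the connector makes on your behalf.
import requests

CLARIFAI_API_KEY = "YOUR_APP_API_KEY"        # placeholder
IMAGE_URL = "https://example.com/asset.jpg"  # placeholder

response = requests.post(
    "https://api.clarifai.com/v2/models/general-image-recognition/outputs",
    headers={"Authorization": f"Key {CLARIFAI_API_KEY}"},
    json={"inputs": [{"data": {"image": {"url": IMAGE_URL}}}]},
    timeout=30,
)
response.raise_for_status()

# Each concept comes back with a name and a confidence value between 0 and 1.
concepts = response.json()["outputs"][0]["data"]["concepts"]
suggested_tags = [c["name"] for c in concepts if c["value"] >= 0.85]
print(suggested_tags)
```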

Step 3: Scan and Flag Sensitive Content

Another Business Rule in Picturepark can easily be set up to monitor the matches from AI tagging against your smart list of flagged terms for sensitive content. The Business Rule then automatically adds any sensitive terms it has detected to a dedicated text field.

You can help your users understand why a term is inappropriate by giving them an explanation of why the content was flagged and instructions on how it may be used.

This way your users can clearly see the flagged terms when they are reviewing content. They can also click on a term to see your instructions and explanations for it. The more you add, the more context your users will see.
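Conceptually, the matching performed by this Business Rule boils down to something like the following sketch, which reuses the illustrative suggested_tags and FLAGGED_TERMS from the earlier snippets. In Picturepark itself this comparison happens inside the Business Rule, not in code you write.

```python
# Minimal sketch of the matching step: compare AI-suggested tags against the
# flagged-term list and build the note a reviewer would see on the content.
def flag_sensitive_tags(suggested_tags, flagged_terms):
    normalized = {t.lower() for t in suggested_tags}
    matches = []
    for category, entry in flagged_terms.items():
        hits = sorted(normalized & set(entry["terms"]))
        if hits:
            matches.append({
                "category": category,
                "terms": hits,
                "instruction": entry["instruction"],
            })
    return matches

matches = flag_sensitive_tags(suggested_tags, FLAGGED_TERMS)
for m in matches:
    print(f"Flagged ({m['category']}): {', '.join(m['terms'])} - {m['instruction']}")
```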

Step 4: Who Can and Cannot See Content

Once the sensitive terms have been collected, yet another Business Rule can apply “Managed by Admin” permissions, which automatically makes the content unavailable to the users you have specified.
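Again purely conceptually, and reusing the illustrative matches from the previous sketch, the decision comes down to something like this; the role names are made up, and in Picturepark the Business Rule applies the actual permission sets for you.

```python
# Conceptual only: if any flagged terms were detected, restrict visibility
# to owners and administrators. In Picturepark this is done by a Business
# Rule applying "Managed by Admin" permissions, not by custom code.
def visible_to(user_roles, matches):
    if not matches:
        return True  # unflagged content stays generally visible
    return bool({"owner", "administrator"} & set(user_roles))

print(visible_to(["editor"], matches))         # False whenever something was flagged
print(visible_to(["administrator"], matches))  # always True
```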

Configure Content Moderation Business Rule

Step 5: See It Live in Action

Have your smart list of sensitive terms ready and up to date, and let the powerful Picturepark Business Rule Engine do the magic.

See it for yourself by testing content moderation live in the Picturepark demo.

  1. Open https://picturepark.com/now
  2. Request your personal demo
  3. Update the smart list of flagged terms in your demo
  4. Upload your content
  5. Sit back and let Picturepark handle your content moderation on your terms (pun intended!)
