
What Is Content Moderation? And What It Has to Do with Section 230


The advent of online forums and social media platforms has led to an explosion of user-generated content (UGC). As a result, the question "What is content moderation?" is increasingly important for platforms that host UGC. Read on to learn what content moderation is and how Section 230 shields social media platforms from liability for controversial UGC.

What Is Content Moderation?

The term user-generated content refers to any form of content — text, videos, images, reviews, etc. — created by users of an online platform. Unlike professionally produced content, UGC is often raw, unfiltered, and reflective of the views, opinions, creativity, and experiences of the users. As such, this content can also include messages that espouse violence, bigotry, or hate, or that push otherwise controversial or untrue narratives.

Content moderation is a crucial risk mitigation practice employed by online platforms to screen, monitor, and control UGC. This process ensures that the content aligns with the platform's policies, community guidelines, and applicable legal regulations. It’s a balancing act between allowing freedom of expression and maintaining a safe, respectful online environment.

The approach to content moderation varies widely. Some platforms actively search for violative UGC, while others rely on their users to flag inappropriate content. Mainstream platforms with robust advertising infrastructure tend to enforce stricter UGC guidelines to better control the user experience. Conversely, fringe or niche platforms and forums are often more permissive about the UGC they allow and laxer in their moderation.
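To make the distinction between those two approaches concrete, here is a minimal, purely illustrative Python sketch of a hybrid review queue that combines proactive screening with reactive user flagging. Every name in it (BLOCKLIST, FLAG_THRESHOLD, Post, needs_review) is hypothetical; real moderation systems layer machine learning classifiers, human review teams, and policy-specific rules far beyond this toy example.

```python
# Illustrative sketch only: a toy hybrid moderation queue. The blocklist
# terms, threshold, and data structures are hypothetical, not drawn from
# any real platform's moderation API.
from dataclasses import dataclass

BLOCKLIST = {"scam_link", "slur_example"}  # hypothetical violative terms
FLAG_THRESHOLD = 3  # escalate to human review after this many user flags

@dataclass
class Post:
    post_id: int
    text: str
    user_flags: int = 0

def needs_review(post: Post) -> bool:
    """Return True if a post should be sent to a human moderation queue."""
    # Proactive screening: the platform actively searches for violations.
    if any(term in post.text.lower() for term in BLOCKLIST):
        return True
    # Reactive screening: the platform relies on users to flag content.
    return post.user_flags >= FLAG_THRESHOLD

posts = [
    Post(1, "Check out this scam_link now"),
    Post(2, "Harmless vacation photo", user_flags=4),
    Post(3, "Regular discussion post"),
]
queue = [p for p in posts if needs_review(p)]
print([p.post_id for p in queue])  # -> [1, 2]
```

Either signal alone misses content the other catches, which is why many platforms blend proactive and reactive moderation rather than choosing one.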

Section 230’s Role In Protecting Social Media Platforms

Section 230, a provision of the Communications Decency Act of 1996, is a critical piece of internet legislation in the United States. Broadly speaking, it provides immunity to online platforms from liability for content posted by their users. In simple terms, if a user posts something illegal or harmful on a platform, Section 230 generally shields the platform from being held legally responsible for that content.

This immunity has been a cornerstone for the growth of the internet, allowing social media platforms, blogs, news websites, and forums to host user discussions without fear of legal repercussions. But some legislators think it has also allowed harmful content to thrive.

The Potential Impact of the SAFE TECH Act on Social Media Platforms

The SAFE TECH Act, introduced in 2023, aims to reform Section 230. If passed, it could significantly change the landscape of content moderation and the legal responsibilities of online platforms.

The Act proposes that platforms be held liable for paid content that breaks the law. If passed, platforms may no longer be protected from injunctions requiring the removal of certain content. The Act also aims to ensure that civil rights laws, wrongful death actions, and laws against stalking, harassment, and discrimination apply fully to online platforms, irrespective of Section 230.

This proposed amendment to Section 230 could have significant implications for social media platforms, which would become legally responsible for certain types of user content in ways they haven't been before. That shift would make content moderation more important than ever, requiring platforms to detect and remove harmful, illegal, or offensive content promptly and thoroughly.

Grow Your Business, Reduce Risk, and Safeguard Your Brand With LegitScript

Is your internet platform or social media company potentially impacted by proposed amendments to Section 230? Download Section 230 — Proposed Changes that Will Expose Social Media and Internet Platforms to New Risk and you'll uncover:

  • How Section 230 could potentially be amended by the SAFE TECH Act
  • How a Platform Monitoring solution could protect your brand if the SAFE TECH Act passes

Smelting words into subject matter expertise since 2020, Thea Le Fevre specializes in B2B SaaS Content Marketing. She believes in embracing innovation and produces AI-assisted content along with organically crafted content. Take a deep dive into her work for up-to-date industry news surrounding issues in trust & safety, payments risk & compliance, healthcare, and more.
