Blog 4: Assessing Instagram and YouTube’s Efforts to Curb Misinformation
In this blog, I assess two platforms, Instagram and YouTube, on their efforts to curb misinformation. Platforms like these are not just used for entertainment but also for consuming news and information, which makes them important channels in the spread of misinformation. This blog analyzes both platforms using a three-step approach: first, explaining what each platform claims to do to control misinformation; second, looking at how its policies are changing or improving; third, evaluating its limitations and whether these efforts actually work, based on research and my personal experience.
This approach helps break down not just what these platforms say, but how effective their policies really are in practice, and it also lets me offer suggestions for improvement.
Image source: Amnesty International
There are strong criticisms of Meta's approach of reducing moderation and relying more on users, especially from organizations like Amnesty International, which argue that it can increase the risk of harmful content spreading.
Their research highlights past cases where weak moderation and algorithmic amplification contributed to real-world harm, such as the Rohingya crisis in Myanmar, where Meta's platform amplified hateful and misleading content that fueled violence. This concern becomes more relevant as Meta shifts away from fact-checking and toward Community Notes, which may not be actively used by most users.
Based on my own experience, Instagram often shows emotional or aspirational content that can sometimes be misleading, even if it is not completely false.
I also think that warning labels are not very effective because they are easy to ignore, and most users do not take the time to analyze them. Overall, while Instagram’s policies exist, their effectiveness is limited because user behavior and platform design do not always support those efforts.
Image Source: The Guardian
YouTube takes a somewhat stricter approach to misinformation compared to Instagram. In its policies, the platform states that it removes content that poses a serious risk of harm, especially in areas like health misinformation, election interference, and manipulated media. It uses a strike system where creators receive warnings for violations, and repeated offenses can lead to the termination of an entire channel.
I believe this strict approach is necessary, especially at a time when a large portion of media consumption has shifted to YouTube. YouTube has also created more detailed frameworks for specific types of misinformation. For example, for medical content it categorizes misinformation into prevention, treatment, and denial misinformation. This shows that the platform is trying to create more structured and targeted policies for different types of harmful content. Overall, these rules suggest that YouTube focuses on direct enforcement by removing harmful content rather than simply reducing its visibility.
In addition to its existing policies, YouTube has also worked on improving how it handles misinformation, especially in the area of health. According to The Hill, the platform has introduced clearer frameworks based on guidance from health authorities and has removed large amounts of harmful content, including videos promoting dangerous medical treatments or false claims about diseases, such as unproven cures. These updates show that YouTube is actively trying to improve its system and respond to the real-world risks of misinformation.
Despite these efforts, YouTube still faces significant criticism. According to The New York Times, research has shown that the platform has “blind spots,” especially in short-form content like YouTube Shorts and in non-English videos, where misinformation can spread more easily.
In addition, reporting from The Guardian suggests that YouTube’s systems still allow “borderline” misinformation to spread, even if it does not fully violate platform policies. Even though YouTube removes harmful content, its recommendation system often pushes engaging or controversial videos that may not be fully accurate.
Image source: YouTube Official Blog
From my personal experience, YouTube feels more flooded with random and sometimes misleading content than Instagram. While its rules may be stricter, the overall user experience still exposes viewers to a large amount of questionable content, which suggests that strict enforcement through the strike system alone is not enough.
Comparing both platforms:
When comparing both platforms, it becomes clear that they take different approaches but face similar challenges. Instagram focuses more on reducing the spread of misinformation through labels and visibility control, while YouTube relies more on removing harmful content and enforcing rules through strikes. However, both platforms are highly optimized for engagement, which means they tend to promote content that keeps users interested, even if that content is emotional or slightly misleading.
Recommendation:
In my opinion, to improve these systems, both Instagram and YouTube need to address the gap between policy and actual user behavior. Research shows that even with strict enforcement, misinformation continues to spread due to platform design. According to The New York Times, YouTube still has “blind spots” in areas like short-form content and non-English videos, while reporting from The Guardian suggests that “borderline” misinformation can still be recommended by the algorithm.
Similarly, changes at Meta, as reported in another New York Times article, show a shift away from fact-checking toward Community Notes, which may reduce active moderation. Criticism from Amnesty International further highlights how weaker moderation and engagement-driven systems can amplify harmful content and even contribute to real-world consequences.
Based on these findings, YouTube should focus on improving content quality and increasing user awareness of misinformation, for example by integrating reliable sources more clearly within videos and providing stronger context. Instagram, for its part, should strengthen its moderation system instead of relying heavily on Community Notes. Bringing back more structured fact-checking and implementing stronger detection systems could help identify misinformation earlier and reduce its spread more effectively. These recommendations directly address the limitations identified in both the research and my personal experience.
Overall, both Instagram and YouTube have taken steps to address misinformation, but their effectiveness remains limited. While YouTube relies on strict enforcement and Instagram focuses on reducing visibility, both continue to struggle because their systems are driven by engagement. As a result, misinformation is not eliminated but merely managed to different degrees. Addressing this issue will require not only stronger policies but also changes in how content is recommended.