Twitter Experiments With Adding Bright Orange Fake News Flags To Tweets
  • Cheryl Tan


Updated: Aug 19, 2021

Sometimes, it’s all too easy to believe whatever you read on the Internet. Fake news is rampant, but social media platforms have been taking steps to crack down on it without impeding legitimate content.


Twitter’s latest attempt at combating fake news has been leaked. Tweets containing factually inaccurate or misleading information will have an orange label appended below the original tweet, describing it as “violating the Community Policy on Harmfully Misleading Information”.

Credit: Twitter

Twitter confirmed to NBC News that this demo is currently being trialled. Sample images from the demo show a range of topics being flagged as fake news, including a tweet containing medical misinformation about COVID-19.


Not only will Twitter suppress the tweet’s reach, but it will also place correction notices from verified users and fact-checkers below the tweet, providing accurate information that disputes the original claim.


Regular users might also be able to contribute, with a “community reports” feature appearing in the demo that is described as working like Wikipedia.


This isn’t the first time social media platforms have implemented features to combat fake news. Twitter previously added an option to its report button allowing users to report tweets that provide misleading information about voting.


Instagram started adding labels to false or partially false images and videos, stating that the content has been reviewed by fact-checkers and deemed to contain false information. To view the content, users have to tap a button. If users want to share such content, a pop-up appears to warn them again and requires them to select “share anyway”, acknowledging that the information might be false; their post will then carry a label stating as much.

Credit: Facebook

Facebook previously had something similar to what Twitter is now testing: a red flag icon next to false information. It scrapped the feature in 2017 because “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs”. Instead, Related Articles are now placed below false information to give people more context.


Recently, Facebook has been relying more on humans working in conjunction with AI algorithms to address false content quickly. Under a pilot programme announced in December 2019, community reviewers act as researchers, submitting an initial assessment and supporting evidence to third-party fact-checkers on why a piece of content is false.


It’s clear that social media platforms are stepping up their game in dealing with the growing amount of fake news, and that’s something we sorely need at a time when rumours are running rampant.

