Press "Enter" to skip to content

Twitter users to fact-check fake images

Twitter is testing Community Notes for media, a crowdsourced moderation system whereby users can fact-check misleading or hoax images to prevent them from going viral and misinforming viewers.

There have been a few instances where an AI-generated image has fooled the masses on social media. Just over a week ago, a fake AI image showing an explosion near The Pentagon – the US Department of Defense HQ – went viral.

Source: CNN.

The image began circulating as the US stock markets opened and was shared by high-profile Twitter accounts, including Russian state news outlet RT and the financial news site ZeroHedge. That morning the S&P 500 dipped by about 0.3 per cent to a session low, briefly wiping billions off the stock market, before recovering quickly once the image was reported as a hoax.

A Bloomberg newswire article, republished by The Sydney Morning Herald and other outlets, claims it’s ‘possibly the first instance of an AI-generated image moving the market’.

There is a long list of ways that AI-generated images can be misused, market manipulation among them. It’s conceivable that some dodgy hedge fund managers could circulate fake images to move the market, allowing them to profit from the buying or selling opportunities that arise.

Fake images could already be created by anyone with basic Photoshop skills – but it is AI that has opened people’s minds to the risk.

In an effort to ‘create a better-informed world’, Twitter is inviting users to become Community Notes contributors. The feature has been available for written Tweets since last year, but is now being tested for images and other media.

‘We believe regular people can valuably contribute to identifying and adding helpful context to potentially misleading information,’ Twitter says. ‘Many of the internet’s existing collaborative sites thrive with the help of non-expert contributions — Wikipedia, for example — and, while it’s not a cure-all, research has shown the potential of incorporating crowdsourced based approaches as part of a broader toolkit to address misleading information on the internet.’

It works like this: contributors write notes on images posted by others, either adding context to an image or fact-checking an outright falsified picture.

‘Contributors can leave notes on any Tweet and if enough contributors from different points of view rate that note as helpful, the note will be publicly shown on a Tweet.’

Contributors also have an ‘impact score’, a rating that indicates how helpful – or unhelpful – their notes have been. A score of 10 is needed before a contributor can add notes to images.
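For the curious, here is a rough sketch in Python of how rules like these could fit together. Twitter has not published the actual Community Notes scoring algorithm, so the class names, the helpful-rating threshold and the numeric stand-in for ‘different points of view’ are all assumptions for illustration; only the impact score of 10 comes from Twitter’s description.

# A rough sketch only: Twitter has not published the real Community Notes
# algorithm. The thresholds below (other than the impact score of 10) and the
# idea of a numeric "viewpoint cluster" are assumptions for illustration.
from dataclasses import dataclass, field

MEDIA_NOTE_MIN_IMPACT = 10      # from the article: score of 10 to add media notes
MIN_HELPFUL_RATINGS = 5         # assumed
MIN_DISTINCT_VIEWPOINTS = 2     # assumed stand-in for "different points of view"


@dataclass
class Contributor:
    handle: str
    impact_score: int = 0
    viewpoint_cluster: int = 0  # hypothetical proxy for a contributor's viewpoint

    def can_write_media_notes(self) -> bool:
        return self.impact_score >= MEDIA_NOTE_MIN_IMPACT


@dataclass
class Note:
    author: Contributor
    text: str
    helpful_raters: list[Contributor] = field(default_factory=list)

    def rate_helpful(self, rater: Contributor) -> None:
        self.helpful_raters.append(rater)

    def is_publicly_shown(self) -> bool:
        # Shown only when enough raters, spanning different viewpoints, agree.
        viewpoints = {r.viewpoint_cluster for r in self.helpful_raters}
        return (len(self.helpful_raters) >= MIN_HELPFUL_RATINGS
                and len(viewpoints) >= MIN_DISTINCT_VIEWPOINTS)

In this toy model, a note written by a contributor with an impact score of at least 10 only surfaces once five helpful ratings arrive from at least two different viewpoint clusters.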

Once a note becomes publicly visible, all copies of the image posted to Twitter will have the note attached.

‘It’s currently intended to err on the side of precision when matching images, which means it likely won’t match every image that looks like a match to you. We will work to tune this to expand coverage while avoiding erroneous matches.’
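Twitter hasn’t said how it matches copies of an image, but one common way to do this kind of ‘err on the side of precision’ matching is perceptual hashing with a strict distance threshold. The sketch below uses the third-party imagehash and Pillow libraries purely as an illustration; the threshold value is an assumption, not Twitter’s.

# Illustration only: perceptual-hash matching with a strict (precision-biased)
# threshold. Twitter has not disclosed its actual image-matching technique.
from PIL import Image
import imagehash

STRICT_DISTANCE = 2  # assumed: a small Hamming distance means fewer false matches


def note_applies_to(noted_image_path: str, candidate_image_path: str) -> bool:
    noted = imagehash.phash(Image.open(noted_image_path))
    candidate = imagehash.phash(Image.open(candidate_image_path))
    # Subtracting two hashes gives their Hamming distance.
    return noted - candidate <= STRICT_DISTANCE

Raising STRICT_DISTANCE would catch more re-compressed or cropped copies, at the cost of occasionally attaching a note to the wrong image, which is the trade-off Twitter describes.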
