Stephen Ritter, Chief Technology Officer at Mitek, discusses how digital identities can play into the world of fake news and misinformation
In the past year, misinformation on social media platforms has spread like wildfire. From conspiracy theories linking 5G towers with COVID-19, to rumours spreading about the COVID-19 vaccine altering DNA, many are seeing the pandemic as not only a public health crisis, but a communications crisis too. The worrying reach of misinformation in recent years has pushed social media platforms, notably Twitter, to introduce new fact-checking tools to inform users of potentially misleading information.
The introduction of these fact-checking tools not only highlights the increasing role these platforms have in how we consume information, but also draws attention to the role they play in forming our digital identities.
Our digital identity is made up of the information about us that exists online, including the content that we post on social media. Whether what we post is deemed as credible and reliable will ultimately contribute to how people perceive us – online and in the real world.
Just as we associate a blue tick with a real person, these fact-checking tags are a symbol of trust and credibility. Anyone with these tags on their posts will likely see their reputation affected. It is therefore important to understand how these tools work, how they affect our digital identity and its reputation, and what can be done to ensure users on the platform are receiving accurate information from the accounts they follow and trust.
First, let’s manage misinformation
In recent years, many people have turned to social media as a news source. In a study conducted last year by Ofcom, half of adults in the UK used social media to stay updated on the latest news. These findings were reported before the onset of COVID-19, and the figure has likely risen as the pandemic pushed people to spend more time online. That makes it even more vital now to prevent the spread of false information on social media.
When thinking about how these social media platforms are rolling out fact-checking tools, it might not seem possible for the platforms to check every single post, especially with Twitter seeing more than 500 million tweets posted every day. But, luckily for them, the platforms don’t need to. For many users, published content will only reach their closest contacts, and much of this content consists of personal thoughts or opinions not meant to be shared with the wider population.
Blue-tick accounts: the super-spreaders
So, which accounts should these platforms be monitoring to ensure published content is factual? A good place to start would be to look at those aforementioned blue-tick, verified accounts and other influential users and prominent public figures like politicians and CEOs.
This is a good starting point for two reasons. Firstly, to get that blue tick, users often have to undergo additional checks to confirm they are indeed the person they claim to be. This means that the information they post online can be directly associated with their real-world identity and is not from a fake identity or account. Secondly, those who are verified often have large numbers of followers – by monitoring these accounts for factual accuracy, it will have a greater impact on managing the spread of misinformation, compared to monitoring posts from average users.
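The prioritisation logic described above can be sketched as a simple heuristic. This is a hypothetical illustration only, not any platform's actual system: rank accounts by verification status and follower count, and put the highest-reach accounts at the front of the fact-checking queue. The account names, scoring function, and weighting are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    verified: bool
    followers: int

def review_priority(account: Account) -> float:
    """Hypothetical scoring: posts from verified, high-follower accounts
    reach the most people, so they are reviewed first."""
    score = float(account.followers)
    if account.verified:
        score *= 10  # assumed weighting: verified identities carry extra trust
    return score

accounts = [
    Account("@local_cafe", verified=False, followers=800),
    Account("@city_mayor", verified=True, followers=250_000),
    Account("@random_user", verified=False, followers=120),
]

# Build the review queue with the highest-impact accounts first
queue = sorted(accounts, key=review_priority, reverse=True)
print([a.handle for a in queue])  # → ['@city_mayor', '@local_cafe', '@random_user']
```

Even a crude weighting like this captures the point of the argument: monitoring a handful of verified, widely followed accounts does far more to limit the reach of misinformation than reviewing the same number of posts from average users.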
Platforms should also establish policies and guidelines on which accounts are subject to fact-checking and how it is carried out. Sharing these guidelines will help the public better understand how posts are reviewed and flagged as fact versus opinion, and enable them to judge for themselves the reputations of accounts that are repeatedly flagged for false information.
A joint effort is required
The spread of misinformation shows no signs of slowing down, and this has encouraged the biggest platforms to put competition aside and come together with governments and researchers to tackle the growing issue. As vaccines are rolled out globally, it is particularly important that users are alerted to any misinformation around COVID-19 vaccines, so they are aware of the latest and most accurate information.
Together with FullFact, a fact-checking charity, Facebook, Twitter and YouTube will be launching a framework in January which will set out new standards for tackling misinformation. Making sure these guidelines are clear and transparent to users will be critical in ensuring they know what is considered fact and what is not.
Misinformation beyond social issues
Looking forward, we shouldn’t be surprised if misinformation starts to spread beyond elections and public health, and into corporate issues too, affecting company and product information. Unfortunately, anyone looking to spread false information on a competitor now has a playbook of successful tactics to damage the reputation of another group, draw more attention to themselves or muddy the waters on a sensitive topic.
In order to counteract this misinformation and build trust with customers, businesses will need to develop clear communications guidelines when sharing or correcting information, and provide more visibility into their practices. When the line between fact and fiction blurs, trust will play a crucial role in influencing consumers’ loyalty and purchasing decisions.
Now, as more people rely on social media for information than ever before, it is vital that social media giants, along with public authorities, fulfil their duty to keep the public accurately informed. Factual information has a huge role to play in the public's safety and security. If our digital identities are at risk of being tainted with inaccuracy, it won't only be our digital identity at risk – but our real-world image too.