Misinformation risks – How big are they?
Misinformation on social media spreads further, faster, and deeper than truthful information, and this rapid spread can affect everyone online and offline. For example, misinformation can lead to broad negative consequences, including distortions of electoral processes, incitements to violence, and the fuelling of conspiracy theories, as well as presenting a risk to personal and public health. There is also widespread public concern over mis- and disinformation, particularly online, with social media identified as the platform where people are most concerned about coming across false or misleading information.
In recognition of these risks and increasing concerns, many are now working to better understand our online information environments, including the IRIS Academic Research Group. Understanding misinformation specifically is, put simply, a work in progress: defining its exact nature and measuring its influence and potential impact remain real challenges, with many questions still unanswered.
Real-world impact of misinformation via social messaging apps
(BBC News, 19 July 2018) Handikera, Karnataka, India
Image caption: The car had overturned when the mob began pelting it with stones (Source ref)
“A 32-year-old Indian software engineer has become the latest victim in a spate of mob lynchings, allegedly spurred by child abduction rumours spreading over WhatsApp…
“They kept hitting us, demanding to know how many children we had kidnapped,” says Mohammad Salman, who is still in shock, his body bruised and his face scarred with stitches.
On 13 July, Mr Salman, 22, and his two friends were brutally beaten by a mob that suspected them of kidnapping children. The last thing he remembers seeing was his friend, Mohammad Azam, being dragged away with a noose around his neck. Mr Azam died from his injuries.
“At least 17 others have allegedly been killed in India over child kidnapping rumours since April 2018. In all of these cases, police say, the rumours were spread via messages on WhatsApp…
Investigations into these incidents have shown that people often forwarded false messages or doctored videos to large groups, some with more than 100 participants. And violent mobs would gather quickly and attack strangers, leaving police little time to respond.”
Social media and messaging apps
As part of our information ecosystem, encrypted messaging apps such as WhatsApp and Facebook Messenger, and more recently Telegram and Signal – described in a 2021 New York Times article as “the hottest apps in the world” – are used by millions. Much of their popularity stems from their ability to offer privacy and keep online conversations confidential. More problematically, however, they also create spaces where misinformation can spread freely, without safeguards to protect people from the harm it can cause.
To curb some of the observed problems with misinformation, the platforms themselves have made some efforts. For example, in 2018, after a spate of mob killings in India like the story above, WhatsApp set new limits on message forwarding to fight misinformation. These limits were extended to all users in 2019, and tightened further in 2020 in response to false information about COVID-19. These are positive steps, but with competing voices arguing ever louder over technology business models, legal rights, and lines of accountability, and no resolution yet in sight, more needs to be done.
User-led crowdsourcing approach
Crowdsourcing is a method that sits between traditional research methods, such as surveys, and newer approaches to collecting and analysing data, such as social listening. It offers researchers a way to access misinformation circulating in the most hard-to-reach environments – private chats – and to track how misinformation narratives proliferate and spread.
Crowdsourcing has several attractive features: first, it functions as a type of participative online activity for heterogeneous individuals, groups or organisations to engage with in a flexible and voluntary way; second, when it is based on the use of mobile phones (“Location-aware mobile crowdsourcing”), it offers great potential for collecting content directly from users in real-time; and third – also the focus of our research here at LSHTM – it could be applied as an early warning system that helps to detect new, emerging rumours and false messages before they go viral.
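The third feature, an early warning system, can be illustrated with a minimal sketch. The function below is purely hypothetical – it is not LSHTM's actual system, and the name, input format, and thresholds are all illustrative assumptions. It takes crowdsourced reports as (narrative, timestamp) pairs and flags narratives whose report volume in the latest time window has jumped sharply relative to the preceding window:

```python
from collections import Counter
from datetime import datetime, timedelta

def flag_emerging_rumours(reports, window_hours=24, min_reports=5, growth_factor=3.0):
    """Flag narratives whose report count in the latest window greatly
    exceeds their count in the preceding window.

    reports: list of (narrative_id, timestamp) tuples from crowdsourced
    submissions. Thresholds are illustrative placeholders.
    """
    if not reports:
        return []
    latest = max(ts for _, ts in reports)
    window = timedelta(hours=window_hours)
    # Count reports per narrative in the most recent window...
    recent = Counter(n for n, ts in reports if ts > latest - window)
    # ...and in the window immediately before it, as a baseline.
    previous = Counter(n for n, ts in reports
                       if latest - 2 * window < ts <= latest - window)
    flagged = []
    for narrative, count in recent.items():
        baseline = previous.get(narrative, 0)
        # Flag only narratives with both enough volume and rapid growth.
        if count >= min_reports and count >= growth_factor * max(baseline, 1):
            flagged.append(narrative)
    return flagged
```

A real system would of course need deduplication, narrative matching across differently worded messages, and careful threshold calibration; the sketch only shows the core idea of comparing recent report volume against a baseline to catch rumours before they go viral.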
Digital research at LSHTM
As part of an effort to develop new methodologies and create healthier information ecosystems, IRIS Academic researchers at LSHTM are working with a crowdsourcing app to investigate whether users can be successfully engaged to share examples of information they see in their private messaging chats. As part of the study, users are asked about selected topics (e.g., COVID-19, climate change, democratic values) via the crowdsourcing app, and the examples they share are then used to activate a system of social listening.
Central to the potential of this idea is that “the crowd” interacts through a specialised mobile app designed for rapid data acquisition, allowing users to instantly share (mis)information narratives they have seen or exchanged in private chats. It is hoped that this experimental methodology might provide a scalable and ethically justified way to identify and track the spread of misinformation circulating within more private online environments.