Episode 158 – Misinformation, An Exploration of Its Tail Risks With Stephan Lewandowsky

The way in which we find information has changed over the last few decades, and with this change, we have forgone much of the reliability of the information we immerse ourselves in.

Misinformation, often defined as false or inaccurate information, has become a prevalent issue in the digital age. With the rapid growth of social media platforms and easy access to information online, misinformation can spread quickly and widely, influencing public opinion, shaping beliefs, and even impacting important societal decisions.

Misinformation can take various forms, including fabricated news stories, manipulated images or videos, misleading statistics, and deceptive narratives. It can originate from many sources, including individuals, organisations, or even state actors with specific agendas. The consequences can be far-reaching, leading to confusion, mistrust, polarisation, and sometimes even harm to individuals or communities. As such, combating misinformation has become a significant challenge for governments, tech companies, media organisations, and individuals alike, requiring a multi-faceted approach that involves fact-checking, media literacy education, and responsible online behaviour.

Misinformation is not just an issue for the here and now: the actions people take when they trust misinformation, and coordinated disinformation campaigns, can pose significant risks to society and the political landscape as we know it. To help us unpack these risks, we are privileged to be joined by Professor Stephan Lewandowsky.

Professor Stephan Lewandowsky is a cognitive scientist at the University of Bristol whose main interest is in the pressure points between the architecture of online information technologies and human cognition, and the consequences for democracy that arise from those pressure points. 

His research examines the consequences of the clash between social media architectures and human cognition, for example by researching countermeasures to the persistence of misinformation and spread of “fake news” in society, including conspiracy theories, and how platform algorithms may contribute to the prevalence of misinformation.  

He is also interested in the variables that determine whether or not people accept scientific evidence, for example surrounding vaccinations or climate science. His research is currently funded by the European Research Council, the EU’s Horizon 2020 programme, the UK research agency (UKRI, through the Centre of Excellence REPHRAIN), the Volkswagen Foundation, the John Templeton Foundation (via Wake Forest University’s “Honesty Project”), Google’s Jigsaw, and the Social Sciences Research Council (SSRC) Mercury Project.
