How to regulate climate misinformation on platforms


Summary

Regulating climate misinformation on digital platforms means putting systems in place to identify, reduce, and counter false or misleading information about climate change, so that people can make decisions based on accurate facts. With the rise of strategic disinformation, especially online, tackling the spread of misleading climate narratives is crucial to supporting honest climate action and sustaining public trust.

  • Promote transparency: Advocate for clear labeling of sponsored content and AI-generated material so users can easily spot climate misinformation.
  • Teach media literacy: Encourage platforms and schools to educate people on spotting false climate claims, boosting critical thinking about what they see online.
  • Build collaborations: Support coordinated efforts between governments, businesses, and community groups to share resources and strategies for stopping climate disinformation.
Summarized by AI based on LinkedIn member posts
  • View profile for Roberta Boscolo
    Roberta Boscolo

    Climate & Energy Leader at WMO | Earthshot Prize Advisor | Board Member | Climate Risks & Energy Transition Expert

    165,714 followers

    🌍 Ten Years After Paris: Is the Climate Crisis a Disinformation Crisis?

    In 2015, the world made a historic promise: to keep global warming well below 2°C, and ideally below 1.5°C. We committed to major emission cuts by 2030, and net-zero by 2050. The Paris Agreement marked a new era of global climate cooperation. But ten years on, we're still struggling to cooperate, while the World Meteorological Organization tells us that the Earth's average temperature exceeded 1.5°C over a 12-month period (Feb 2023–Jan 2024) for the first time. Why?

    🔍 A groundbreaking new study, led by 14 researchers for the International Panel on the Information Environment, reviewed 300 studies from 2015–2025. The findings are alarming: powerful interests – fossil fuel companies, populist parties, even some governments – are systematically spreading misleading narratives to delay climate action.

    🧠 Misinformation isn't just about denying climate change. It's now about strategic skepticism – minimizing the threat, casting doubt on science-based solutions, and greenwashing unsustainable practices.

    📺 This disinformation flows through social media, news outlets, corporate reports, and even policy briefings. It targets all of us – but especially policymakers, where it can shape laws and delay critical decisions.

    💡 So what can we do?
    1️⃣ Legislate for transparency and integrity in climate communication.
    2️⃣ Hold greenwashers accountable through legal action.
    3️⃣ Build global coalitions of civil society, science, and public institutions.
    4️⃣ Invest in climate and media literacy for both citizens and leaders.
    5️⃣ Amplify voices from underrepresented regions – like Africa – where more research is urgently needed.

    We must protect not only the planet's climate, but the integrity of climate information.

    🔗 Read more on how disinformation is undermining climate progress – and what we can do about it: https://lnkd.in/eDN9hKAJ

    🕰️ The window is small. But with truth, science, and collective action, we can still turn the tide.

  • View profile for Charles Cozette

    CSO @ CarbonRisk Intelligence

    8,419 followers

    Strategic inoculation messages can effectively combat climate disinformation campaigns from fossil fuel companies, providing a defense against misleading advertising.

    Researchers conducted an online experiment with 1,045 U.S. participants testing how effectively ExxonMobil's "Future of Energy" native advertisement influenced climate beliefs. The study divided participants into five conditions: a control group; groups exposed to pre-bunking messages from U.N. Secretary-General Antonio Guterres; and groups shown social media posts featuring the ad, with or without advertising disclosures.

    Results demonstrated that exposure to the ExxonMobil ad significantly increased belief in misleading claims about the company's environmental commitment. While the advertisement effectively influenced participants' beliefs, the study found that disclosures helped participants recognize the content as advertising. More importantly, inoculation messages substantially reduced susceptibility to the misleading claims, though they did not eliminate the ad's influence. Notably, the inoculation approach didn't make participants more skeptical of accurate climate information, suggesting targeted effectiveness against disinformation rather than increased general skepticism.

    By Michelle Amazeen, Benjamin Sovacool, Arunima Krishna, Ramit Debnath, and Chris Wells.
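    To make the five-condition design concrete, here is a minimal sketch of how such an experiment could be analyzed, assuming a single numeric belief score as the outcome. The condition labels, group sizes, and simulated effects below are illustrative assumptions, not the study's actual data or results.

    ```python
    # Illustrative analysis of a five-condition message experiment.
    # Condition names and simulated effect sizes are assumptions chosen
    # to mirror the reported pattern (the ad raises belief, disclosure
    # helps a little, inoculation helps more); they are not real data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical mean belief in the ad's misleading claims (1-7 scale).
    conditions = {
        "control":                3.0,
        "ad_only":                4.2,
        "ad_with_disclosure":     3.9,
        "inoculation_then_ad":    3.4,
        "inoculation_disclosure": 3.3,
    }

    n_per_group = 209  # ~1,045 participants split across five conditions
    samples = {
        name: rng.normal(loc=mean, scale=1.2, size=n_per_group)
        for name, mean in conditions.items()
    }

    # Omnibus test: do the conditions differ at all?
    f_stat, p_val = stats.f_oneway(*samples.values())
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

    # Pairwise comparisons of each treatment against the control group.
    for name, data in samples.items():
        if name == "control":
            continue
        t, p = stats.ttest_ind(samples["control"], data)
        print(f"control vs {name}: t = {t:.2f}, p = {p:.4f}")
    ```

    In a real analysis the belief scores would come from participants' questionnaire responses rather than a simulation, but the comparison logic is the same: an omnibus test across conditions, then planned contrasts against the control.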

  • View profile for Hendrik Bruns

    Policy Analyst at the European Commission Joint Research Centre

    1,582 followers

    How the European Commission can use behavioural insights to combat misinformation 🔬

    What have the #COVID pandemic and the ongoing climate crisis taught us? That on fertile ground, mis- and disinformation can spread like wildfire. This is why, here at the European Commission, we are leveraging behavioural insights to combat misinformation 💡. Today at the JRC Disinformation Workshop in Brussels, we are presenting findings from a #EUPolicyLab experiment with 5,000+ participants across four EU countries. We identified two effective approaches:
    👉 debunking (exposing falsehoods and explaining why they are false) and
    👉 prebunking (training individuals to recognize misinformation beforehand). ✅

    Here's what you need to know 🔑:
    1. Debunking and prebunking WORK when combating misinformation about climate change and COVID-19.
    2. These interventions WORK when delivered by a credible source, such as the European Commission.
    3. Participants' level of trust in EU institutions influenced the effectiveness of our interventions in two cases (see the sketch after this post).

    Debunking and prebunking are not a silver bullet, of course, and media literacy will continue to play a major role in the fight against misinformation, but their usefulness in risk communication and public policy cannot be overstated. 🌐

    You can find out more here: https://europa.eu/!yyPwGJ

    #misinformation #disinformation #behavioralscience #EuropeanCommission #behaviouralinsights
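    As a concrete illustration of point 3 above, the sketch below shows one standard way to test whether trust moderates an intervention's effect: a regression with an interaction term. The variable names, simulated numbers, and effect sizes are illustrative assumptions, not the EU Policy Lab data or its analysis.

    ```python
    # Sketch of a moderation analysis: does trust in EU institutions
    # change how well debunking/prebunking works? All variables and
    # numbers here are simulated for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 5000

    df = pd.DataFrame({
        # 0 = no intervention, 1 = received debunking or prebunking
        "intervention": rng.integers(0, 2, size=n),
        # self-reported trust in EU institutions on a 0-10 scale
        "trust": rng.uniform(0, 10, size=n),
    })

    # Simulated outcome: accuracy in judging claims (higher is better),
    # with the intervention effect growing as institutional trust rises.
    df["accuracy"] = (
        50
        + 4.0 * df["intervention"]
        + 1.0 * df["trust"]
        + 0.8 * df["intervention"] * df["trust"]  # the moderation effect
        + rng.normal(0, 10, size=n)
    )

    # The intervention:trust coefficient tests whether trust moderates
    # the intervention's effectiveness.
    model = smf.ols("accuracy ~ intervention * trust", data=df).fit()
    print(model.summary().tables[1])
    ```

    A significant interaction coefficient is what "trust influenced the effectiveness of our interventions" would look like statistically: the more participants trust the source, the larger the intervention's benefit.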

  • View profile for Julio Romo

    Advisor to businesses, governments and investors - CVCs, family offices - on strategy, strategic communications, reputation and innovation positioning.

    3,286 followers

    🔎 House of Lords Committee on Media Literacy and Misinformation 🔎

    Artificial intelligence (AI) is reshaping how information circulates online, posing fresh risks to trust, reputation, and social harmony. In a recent UK House of Lords session, Dr Mhairi Aitken from the Alan Turing Institute and Professor Sander van der Linden from the University of Cambridge presented urgent evidence on the scale of this threat and the need for stronger safeguards. I watched the committee hearing, and below is a summary of the insights shared by both Mhairi and Sander, which I have written up in the #ReputationMatters article.

    Three Main Concerns:
    1️⃣ AI-Driven Misinformation: Generative AI can fabricate images, videos, and text at high speed, making it harder than ever to verify authenticity.
    2️⃣ Erosion of Trust: Constant exposure to manipulated content fosters "over-scepticism", where people doubt all online material.
    3️⃣ Policy Gaps: Although the Online Safety Act addresses certain online harms, it does not fully empower regulators like Ofcom to tackle misinformation and disinformation. Weak enforcement allows harmful content to circulate unchecked.

    Three Key Recommendations:
    ✅ Strengthen Regulation: The witnesses called for an expanded remit so regulators can ensure platforms label AI-generated content and prevent large-scale manipulation.
    ✅ Embed Media Literacy: Both experts urged more robust "prebunking" lessons in schools and adult education. By repeatedly teaching critical thinking skills, individuals build long-term resilience to manipulated content.
    ✅ Promote Cross-Sector Collaboration: Government agencies, businesses, and community organisations must coordinate and work in a more collaborative manner.

    Professor van der Linden said: "40% of people say that in the preceding month they've seen misinformation in the UK, 90% say that they're very concerned about the impacts of misinformation, and about 20% say that they've seen deepfakes."

    Why It Matters 💡
    Leaders in government and business face urgent pressures as public distrust threatens everything from health advice to corporate reputations. Taking decisive action through regulation, education, and collaboration can protect both institutions and individuals.

    Read the full LinkedIn Article for an in-depth analysis. 👉 Comment, share, connect and/or subscribe for more insights on how you and your organisation can safeguard the trust, perception, and reputation that is loaned to you. The article was originally published on my Twofourseven Strategy blog: https://lnkd.in/eQBCaZMJ
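    On the labelling recommendation, here is a toy sketch of what platform-side labelling of AI-generated material might look like, assuming a simple content record with a creator disclosure flag and a provenance signal. Every field name and the decision rule are hypothetical illustrations, not requirements of the Online Safety Act, Ofcom, or any real platform.

    ```python
    # Toy model of AI-content labelling on a platform. All fields and the
    # labelling rule are hypothetical sketches of the committee's
    # recommendation, not any real regulator's or platform's scheme.
    from dataclasses import dataclass

    @dataclass
    class ContentItem:
        body: str
        declared_ai_generated: bool = False  # creator's own disclosure
        provenance_signals_ai: bool = False  # e.g., metadata attached at generation time

    def label_for(item: ContentItem) -> str | None:
        """Return a user-facing label if either signal marks the item as AI-generated."""
        if item.declared_ai_generated or item.provenance_signals_ai:
            return "AI-generated content"
        return None

    feed = [
        ContentItem("Sunset photo from my trip."),
        ContentItem("Rendered image of a flooded city.", declared_ai_generated=True),
    ]
    for item in feed:
        print(item.body, "->", label_for(item) or "no label")
    ```

    The hard part in practice is populating those two flags reliably at scale, which is exactly why the witnesses argued regulators need an expanded remit.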

  • View profile for Jay Van Bavel, PhD

    Psychology Professor | Book Author | Keynote Speaker

    29,303 followers

    95% of Americans identified misinformation as a problem when trying to access important information.

    Unfortunately, social media platforms have struggled to stem the tide of falsehoods and conspiracy theories around the globe. The existing content moderation model often falls short, failing to correct misinformation until it has already gone viral. To combat this problem, we need systemic changes in social media infrastructure that can effectively thwart misinformation.

    In a new set of experiments in the US and the UK, we developed and tested an identity-based intervention: the Misleading count. This approach leveraged the fact that misinformation is usually embedded in a social environment with visible social engagement metrics. We simply added a Misleading count button next to the Like count, which reported the number of people who tagged a social media post as misleading. This intervention was designed to reveal a simple social norm: that people like you found the post misleading.

    We found that the Misleading count reduced people's likelihood of sharing misinformation by 25%. Moreover, it was especially effective when these judgments came from in-group members.

    You can read more in our latest lab newsletter: https://lnkd.in/eRKMgT47
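    As a sketch of how such an intervention could sit in a platform's data model, the snippet below attaches misleading tags to a post alongside its Like count and surfaces the in-group share to the viewer. The class, field names, and display rule are illustrative assumptions, not the study's actual implementation.

    ```python
    # Minimal sketch of the "Misleading count" idea: show how many users,
    # especially in-group members, flagged a post as misleading next to
    # the Like count. The in-group rule here is an assumption for
    # illustration, not the researchers' implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        text: str
        like_count: int = 0
        # user_id -> group label of each user who tagged the post misleading
        misleading_tags: dict[str, str] = field(default_factory=dict)

        def tag_misleading(self, user_id: str, group: str) -> None:
            self.misleading_tags[user_id] = group

        def engagement_bar(self, viewer_group: str) -> str:
            total = len(self.misleading_tags)
            in_group = sum(1 for g in self.misleading_tags.values()
                           if g == viewer_group)
            # Surfacing the in-group share leverages the finding that
            # judgments from in-group members were especially effective.
            return (f"👍 {self.like_count}   "
                    f"⚠️ Misleading: {total} "
                    f"({in_group} from people in your community)")

    post = Post(text="Claim: global warming stopped in 1998.", like_count=120)
    post.tag_misleading("u1", "scientists")
    post.tag_misleading("u2", "neighbors")
    post.tag_misleading("u3", "neighbors")
    print(post.engagement_bar(viewer_group="neighbors"))
    # 👍 120   ⚠️ Misleading: 3 (2 from people in your community)
    ```

    The design choice worth noting is that the count piggybacks on existing engagement metrics, so it reveals a social norm at a glance instead of waiting for a moderator's correction after a post has gone viral.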
