Addressing Threats to Democracy on Social Media


Summary

Addressing threats to democracy on social media means confronting the spread of misinformation, manipulation by artificial intelligence, and digital tactics that undermine trust in elections and institutions. The goal is to protect democratic values and open dialogue by promoting transparency, accountability, and informed public participation online.

  • Promote media literacy: Encourage education that helps people recognize false information and understand how digital manipulation works.
  • Strengthen accountability measures: Advocate for policies that hold tech platforms and AI companies responsible for harmful content and deceptive practices.
  • Support transparent content practices: Recommend the use of watermarking and clear disclosure for AI-generated materials to help users distinguish between authentic and synthetic content.
Summarized by AI based on LinkedIn member posts
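The disclosure recommendation above can be illustrated with a toy sketch. This is not a real watermarking scheme (production provenance standards such as C2PA embed cryptographically signed metadata in the file itself); it only shows the basic idea that a generator publishes a machine-readable disclosure bound to the content, which a consumer can later check. All names here (`make_provenance_manifest`, `verify_manifest`, `"example-model-v1"`) are hypothetical:

```python
import hashlib
import json

def make_provenance_manifest(content: bytes, generator: str) -> str:
    """Attach a disclosure record to AI-generated content.

    The manifest pairs a SHA-256 digest of the content with a
    declaration of how it was produced, so a downstream viewer can
    check that the disclosure matches the bytes they received.
    """
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,  # e.g. the model or tool name
    }
    return json.dumps(manifest, sort_keys=True)

def verify_manifest(content: bytes, manifest_json: str) -> bool:
    """Return True only if the manifest's digest matches the content."""
    manifest = json.loads(manifest_json)
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"...synthetic image bytes..."
manifest = make_provenance_manifest(image, "example-model-v1")
print(verify_manifest(image, manifest))             # digest matches
print(verify_manifest(b"tampered bytes", manifest)) # digest does not match
```

A plain hash like this only proves the disclosure and the content belong together; it does not prove who made either, which is why real provenance tools add digital signatures on top.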
  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,872 followers

    "Disinformation campaigns aimed at undermining electoral integrity are expected to play an ever larger role in elections due to the increased availability of generative artificial intelligence (AI) tools that can produce high-quality synthetic text, audio, images and videos and their potential for targeted personalization. As these campaigns become more sophisticated and manipulative, the foreseeable consequence is further erosion of trust in institutions and heightened disintegration of civic integrity, jeopardizing a host of human rights, including electoral rights and the right to freedom of thought. → These developments are occurring at a time when the companies that create the fabric of digital society should be investing heavily in, but instead are dismantling, the “integrity” or “trust and safety” teams that counter these threats. Policy makers must hold AI companies liable for the harms caused or facilitated by their products that could have been reasonably foreseen. They should act quickly to ban using AI to impersonate real people or organizations, and require the use of watermarking or other provenance tools to allow people to differentiate between AI-generated and authentic content." By David Evan Harris and Aaron Shull of the Centre for International Governance Innovation (CIGI).

  • Magdalena Skipper

    Editor In Chief at Nature

    10,001 followers

    Online misinformation is frequently highlighted as a blight that threatens to undermine the fabric of society, polarize opinions and even destabilize elections. In the latest issue of Nature, a collection of articles probes the scourge of misinformation and tries to assess the real risks. In one research paper, David Lazer and colleagues examine the effects of Twitter deplatforming 70,000 traffickers of misinformation in the wake of violent scenes at the US Capitol in January 2021. In a second paper, Wajeeha Ahmad and co-workers explore the relationship between advertising revenue and misinformation. A Comment article by Ullrich Ecker and colleagues discusses the risks posed by misinformation to democracy and elections, and an accompanying Comment article by Kiran Garimella and Simon Chauchard assesses the prevalence of AI-generated misinformation in India. And David Rothschild and colleagues put the harms of misinformation into perspective, highlighting common misperceptions that exaggerate its threat and suggesting steps to improve evaluation of both the effects of misinformation and the efforts made to combat it. In our accompanying editorial we call for more data availability for researchers and greater transparency from online platforms: https://lnkd.in/eWi6f_qt Nature Portfolio

  • Jean-Christophe Conticello

Founder, Giants

    18,975 followers

    🔍 Deepfakes and Digital Sovereignty: A European Imperative 🇪🇺

    The recent viral spread of a deepfake video falsely depicting U.S. Vice President Kamala Harris is a stark reminder of the dangers we face in the digital age. This video, generated using Grok (xAI), was shared by Elon Musk, CEO of X (formerly Twitter), on his platform. The incident underscores how fragile truth can become when powerful technologies fall into the wrong hands, and it highlights a critical issue for Europe: the need to protect our digital sovereignty. As Europe continues to champion democracy, human rights, and the rule of law, we must recognize that our sovereignty is increasingly challenged not just by traditional geopolitical threats but by digital manipulations that can disrupt our societies from within. The threat posed by deepfakes is just one example of how external actors can wield technology to influence public opinion and electoral outcomes.

    🌍 Why European Sovereignty Matters
    Digital and informational sovereignty are crucial for maintaining our independence and protecting our democratic processes. The European Union has made strides in data protection with initiatives like the General Data Protection Regulation (GDPR), but the challenge of deepfakes demands even more comprehensive action.

    💡 What Europe Must Do
    1. 💻 Invest in AI and Deepfake Detection: Europe needs to lead in developing technologies that can identify and counteract deepfakes. This is not just about protecting our citizens; it’s about ensuring that our democracies cannot be easily manipulated.
    2. 📜 Regulate Social Media Platforms: When the CEO of a major social media platform amplifies misinformation and fake news, it’s clear that stricter regulations are necessary. Platforms operating within Europe must be held accountable for the content they host, especially when it comes to preventing the spread of misinformation.
    3. 🎓 Promote Media Literacy: Educating our citizens to recognize and resist digital manipulation is essential. A well-informed public is our best defense against the erosion of truth.
    4. 🤝 Enhance Cybersecurity Collaboration: European nations need a unified approach to cybersecurity, including combating digital disinformation. This collaboration is key to protecting our shared values.
    5. 🔧 Control Digital Infrastructure: Reducing our reliance on non-European digital infrastructure will help safeguard our sovereignty.

    The Kamala Harris deepfake incident is a global wake-up call, and Europe must respond decisively. By strengthening our digital sovereignty, we can protect our democracies and ensure that our voices, not those of external actors, shape our future.

    👉 What steps do you think Europe should take to strengthen its digital sovereignty? Share your thoughts in the comments! #DigitalSovereignty #CyberSecurity #Deepfakes #EU #Innovation #TechForGood #DataProtection #AI #SocialMediaRegulation

  • Raquel Vazquez Llorente

    AI Policy & Governance♦️Bridging Socio-Technical Safety & Regulation♦️Before: Deepfakes, Provenance & Digital Evidence in Crises + Armed Conflicts♦️Human Rights Lawyer

    4,208 followers

    This year, through conversations with journalists, activists, creators, policy-designers and technologists about #deepfakes and #generativeAI, we've gained profound insights at WITNESS. In my latest piece for the Council on Foreign Relations, I share learnings based on our work with communities defending democracy at the frontlines, and I outline ways to safeguard #2024elections worldwide. 💡 Bonus track 💡 ➡ Examples of how, with appropriate disclosure, generative AI can be used positively in the context of #elections. [Full text on link]

    The effects of synthetic media on #democracy are a mix of new, old, and borrowed challenges:
    🆕 Inconvenient truths can be denied as deepfaked. The burden of proof, or perhaps more accurately, the “burden of truth”, has shifted onto those circulating authentic content and holding the powerful to account.
    🧓 AI deepens existing vulnerabilities, bringing a serious threat to principles of inclusivity and fairness that lie at the heart of democratic values. Non-consensual sexual deepfakes can have an additional chilling effect, eroding the diversity and representativeness that are essential for a healthy democracy.
    ♻ Much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

    As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? A roadmap:
    1️⃣ We must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas of socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.
    2️⃣ We should really ask: is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.
    3️⃣ The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem.

    Thanks to Kat Duffy and Kyle Fendorf for publishing the piece. https://lnkd.in/ewp8Sper #UDHR75 #UDHR

  • Disinformation is a "wicked problem"—complex, multi-faceted, and challenging to counter without risking unintended consequences. Tackling it with a “do no harm” policy approach requires nuanced, adaptable strategies that respect freedom of expression and reinforce the foundations of democratic governance. During my mid-career Master’s in Public Policy at the Princeton School of Public and International Affairs, I encountered this excellent Carnegie Endowment for International Peace policy guide. It offers actionable, balanced approaches based on evidence and case studies that can truly strengthen policy responses to disinformation.

    💡 Key strategies include:
    - Empowering Local Journalism: When local news sources disappear, disinformation spreads like wildfire. Strengthening local journalism revives civic trust, keeps communities informed, and builds a first line of defense against disinformation. #DemocracyDiesInDarkness
    - Building Media Literacy: Teaching critical media skills across communities and schools equips individuals to spot manipulation and build resilience against false information.
    - Prioritizing Transparency with Fact-Checking: Going beyond labels, fact-checking that promotes transparency enables audiences to make informed choices, fostering trust without policing beliefs.
    - Adjusting Algorithms & Limiting Microtargeting: Creating healthier online spaces by limiting microtargeted ads and rethinking algorithms reduces echo chambers while respecting autonomy.
    - Counter-Messaging with Local Voices: Developing counter-messaging strategies that engage trusted community voices enables us to challenge false narratives effectively and authentically.

    These approaches are essential for defending open dialogue, strengthening governance, and supporting sustainable development. It's all hands on deck!
    https://lnkd.in/egKKmAqh 🌐 #Disinformation #DoNoHarm #LocalJournalism #FreedomOfExpression #PublicPolicy #CivicTrust cc Melissa Fleming Charlotte Scaddan Rosemary Kalapurakal Alice Harding Shackelford Roberto Valent Allegra Baiocchi (she/her/ella) Danilo Mora Carmen Lucia Morales Liliana Garavito George Gray Molina Marcos Neto Kersten Jauer

  • Delphine Colard

    Spokesperson and Head of Spokesperson’s Unit

    3,684 followers

    The relationship between freedom of expression and disinformation is a hot topic today, but there remains significant confusion about what each truly entails. Disinformation is generally defined as false or misleading content that is intentionally created or shared to deceive and cause public harm. When we talk about fighting disinformation, the main focus is NOT on individual opinions or statements, nor on the right to express them—freedom of speech is a core EU value that we cherish and uphold. Instead, the emphasis is on deceptive behaviors: the techniques and tactics deliberately used to manipulate and mislead us. The DISARM framework currently describes 294 such techniques: https://lnkd.in/e6YNZnYq. Some of the most common are:
    - Manipulating platform algorithms
    - Harassing people based on their identities
    - Creating inauthentic news sites
    Disinformation actors often use content aligned with the beliefs and values we already hold to manipulate us. That's why the key challenge lies in finding the right balance: protecting people from the harmful effects of disinformation while safeguarding freedom of expression and ensuring access to reliable and trustworthy information. This new episode will help you identify a widely used manipulation technique and what you can do to avoid it: https://lnkd.in/edw3kBAF #DontBeDeceived, democracy is built on facts, and safeguarding it demands collective effort from all members of society.

  • Michael Goodman

    Instructor, U.S. Air Force

    2,098 followers

    Executive Summary: PRC cognitive warfare strategies now include the cultivation of internet influencers who use platforms like YouTube to disseminate rumors designed to undermine Taiwan’s democratic institutions. TikTok has become a significant tool in shaping public opinion, exploiting its algorithmic power to spread narratives favourable to Beijing and critical of the United States, especially concerning Taiwan’s 2024 election. Taiwan’s commitment to freedom of speech complicates efforts to regulate platforms like TikTok, with nearly 5 million users exposed to PRC-influenced narratives, posing a challenge to democratic resilience and information integrity. The response to disinformation requires collective action, including regulatory measures, digital literacy education, international investigations into social media platforms’ operations, and global cooperation to uphold transparency and accountability standards.

  • Amit Jaju

    Global Partner | LinkedIn Top Voice - Technology & Innovation | Forensic Technology & Investigations Expert | Gen AI | Cyber Security | Global Elite Thought Leader - Who’s who legal | Views are personal

    13,821 followers

    Recent viral videos featuring two A-list Bollywood actors criticizing the Indian Prime Minister and endorsing the opposition party have sparked concern amid India's ongoing general election. With over half a million views on social media within a week, these misleading clips shed light on the alarming potential of AI-generated content to sway public opinion during the mammoth Indian election, currently underway and set to continue until June. The ongoing Lok Sabha polls, spanning seven phases until June 1st, provide ample opportunities for malicious actors to exploit these tools to manipulate voters. The Election Commission of India's proactive measures, including standard operating procedures for combating fake news, are commendable steps towards safeguarding the integrity of the electoral process. However, the threat posed by deepfake videos and voice cloning should not be underestimated.

    Next actions to take:
    🔹 As citizens, we must remain vigilant and discerning in our consumption of online content, critically evaluating the authenticity of the information presented to us.
    🔹 Collaboration between law enforcement agencies and social media companies is essential to swiftly detect and remove fraudulent content.
    🔹 In the quest for a free and fair electoral process, combating the spread of misinformation must be a collective endeavour: the onus is on both policymakers and individuals to adopt stringent measures and cultivate digital literacy against the proliferation of AI-generated misinformation.

    Here's my previous take on deepfake regulations: https://lnkd.in/dNFkWNW8 And here's an article where I shared my thoughts on decoding deepfakes and how to protect oneself from becoming a victim: https://lnkd.in/ddPVt_xP #DigitalLiteracy #AI #Deepfake #IndianElections
