How to Ensure Compliance With Privacy Laws

Explore top LinkedIn content from expert professionals.

Summary

Compliance with privacy laws means ensuring that organizations handle personal data responsibly and transparently, in line with the legal rules designed to protect individuals' data rights. With privacy laws evolving across regions, companies need clear strategies for managing privacy risk and meeting their obligations under frameworks like the GDPR, the CCPA, and the EU AI Act.

  • Conduct a comprehensive data audit: Identify all personal data your organization collects, where it is stored, how it is used, and with whom it is shared, to establish a clear data inventory for compliance purposes (a minimal record sketch follows this summary).
  • Update privacy notices and policies: Regularly review and revise your privacy policies to include disclosures about AI usage and data processing practices, ensuring they comply with specific regional laws and customers' rights.
  • Obtain consent and honor user rights: Always seek explicit consent before using sensitive data and ensure processes are in place to allow users to access, delete, or opt out of data usage as required by law.
Summarized by AI based on LinkedIn member posts
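
A data inventory is easier to operationalize as a concrete record shape. Below is a minimal Python sketch of one inventory entry; the field names and example values are illustrative assumptions, not requirements drawn from any particular statute.

```python
from dataclasses import dataclass, field

@dataclass
class DataInventoryEntry:
    """One row of a personal-data inventory (illustrative fields only)."""
    data_element: str                 # e.g. "email address"
    category: str                     # e.g. "contact", "health", "biometric"
    source: str                       # e.g. "signup form", "cookies"
    storage_system: str               # e.g. "CRM", "analytics warehouse"
    purpose: str                      # why the data is processed
    shared_with: list[str] = field(default_factory=list)  # vendors/processors
    is_sensitive: bool = False        # drives consent and assessment duties

# Hypothetical entry: browsing history collected via cookies, shared with an ad vendor.
entry = DataInventoryEntry(
    data_element="browsing history",
    category="online activity",
    source="cookies",
    storage_system="analytics warehouse",
    purpose="ad personalization",
    shared_with=["ad-network-vendor"],
)
```

One such entry per data element yields the inventory that the notices, consent flows, and rights processes discussed in the posts below all depend on.
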
  • View profile for Sam Castic

    Privacy Leader and Lawyer; Partner @ Hintze Law

    3,767 followers

    The Oregon Department of Justice released new guidance on legal requirements when using AI. Here are the key privacy considerations, and four steps for companies to stay in line with Oregon privacy law. ⤵️ The guidance details the AG's views on how uses of personal data in connection with AI, or to train AI models, trigger obligations under the Oregon Consumer Privacy Act, including:
    🔸Privacy Notices. Companies must disclose in their privacy notices when personal data is used to train AI systems.
    🔸Consent. Updated privacy policies disclosing uses of personal data for AI training cannot justify the use of previously collected personal data for AI training; affirmative consent must be obtained.
    🔸Revoking Consent. Where consent is provided to use personal data for AI training, there must be a way to withdraw consent, and processing of that personal data must end within 15 days.
    🔸Sensitive Data. Explicit consent must be obtained before sensitive personal data is used to develop or train AI systems.
    🔸Training Datasets. Developers purchasing or using third-party personal data sets for model training may be personal data controllers, with all the obligations that data controllers have under the law.
    🔸Opt-Out Rights. Consumers have the right to opt out of AI uses for certain decisions, like housing, education, or lending.
    🔸Deletion. Consumer #PersonalData deletion rights need to be respected when using AI models.
    🔸Assessments. Using personal data in connection with AI models, or processing it with AI models for profiling or other activities with a heightened risk of harm, triggers data protection assessment requirements.
    The guidance also highlights a number of scenarios where sales practices using AI, or misrepresentations due to AI use, can violate the Unlawful Trade Practices Act. Here are a few steps to help stay on top of #privacy requirements under Oregon law and this guidance:
    1️⃣ Confirm whether your organization or its vendors train #ArtificialIntelligence solutions on personal data.
    2️⃣ Validate that your organization's privacy notice discloses AI training practices.
    3️⃣ Make sure organizational individual-rights processes are scoped to cover personal data used in AI training.
    4️⃣ Set protocols to conduct and document data protection assessments, where required, that address the requirements under Oregon and other states' laws and are maintained in a format that can be provided to regulators.
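
To make the revocation timeline concrete: once consent for AI training is withdrawn, processing of that person's data must end within 15 days under the guidance summarized above. A minimal deadline-tracking sketch in Python; the function and field names are hypothetical.

```python
from datetime import date, timedelta

PROCESSING_STOP_WINDOW = timedelta(days=15)  # per the Oregon DOJ guidance

def processing_stop_deadline(revocation_date: date) -> date:
    """Latest date by which AI-training use of the person's data must end."""
    return revocation_date + PROCESSING_STOP_WINDOW

def is_within_window(revocation_date: date, processing_ended: date | None) -> bool:
    """True if processing ended (or can still end) by the 15-day deadline."""
    deadline = processing_stop_deadline(revocation_date)
    if processing_ended is None:
        return date.today() <= deadline  # still inside the window
    return processing_ended <= deadline

# A consumer revokes consent on June 1; processing must end by June 16.
print(processing_stop_deadline(date(2025, 6, 1)))  # 2025-06-16
```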

  • View profile for Odia Kagan

    CDPO, CIPP/E/US, CIPM, FIP, GDPRP, PLS, Partner, Chair of Data Privacy Compliance and International Privacy at Fox Rothschild LLP

    24,222 followers

    "The one where the EU companies get surprised" (with how much privacy compliance they still need to do in the US) - Is the #privacyFriends episode name I would give for my talk today at the Forum Rettsinformatikk in Oslo (remotely): 1. US laws (old and new) apply to EU companies: Even with no US boots on the ground; They apply directly (active website; or US entity; process information of individuals in the state + thresholds); and they apply to you as a service provider/data processor. 2. The US is no longer a #privacywildwest. Art 5 GDPR concepts of data minimization, purpose limitation, data retention limitation, fair and lawful, necessary and proportionate etc - got exported in the new US state laws (#CPRA, #CPA #UCPA #CTDPA #VCDPA) and increased FTC enforcement. What you can't do in EU you may not be able to do in the US either. Surprise! 3. Your privacy notices need amending! - Add things like categories, verification methods, additional rights and special consumer rights methods. - Have notices at collection - Add notices of financial incentive and the required calculations 4. Figure out the do not sell / share thing. - Analyze all your disclosures and see if they are a sale/share (both offline and to sharing through trackers/cookies) - Get compliant cookie management platform that recognizes Global Privacy Controls (GPC) 5. Address the online trackers. This is not a drill or (just) a potential regulator enforcement. The class-action struggle is real! and the Federal Trade Commission is enforcing on this (GoodRx, BetterHelp, Easy Healthcare - Pay special attention to video tracking (VPPA) and session replay (b/c wiretapping). 6. Figure out your biometrics Illinois BIPA enforcement is on the rise with 9 digit court awards and there are lots of state copycat laws. The FTC is also coming after your biometrics for false or misleading disclosures, and unreliable AI. 7. Figure out your health data. In the wake of Dobbs, the FTC is coming after your sensitive data as are the lawsuits under the new Washington Sate My Health My Data law. US privacy law definition of sensitive information is > Art 9 personal data, you need an opt out /opt in and a #DPIA. 8. Figure out your use of children's information The FTC is coming after #COPPA violations with high fines and other remedies. Even beyond the under 13s, the Age Appropriate Design Code is coming to California (& other states) with strict design requirements and enforcement. 9. Our DPIAs are bigger than yours! There are a lot more cases requiring a DPIA than under GDPR so you need and you may need to upgrade the content of your DPIA too. 10. Our C2C data sharing is bigger than yours! We see your Art 28 DPA we raise it a few provisions; but our business-third party agreement is way more detailed than Art 26 GDPR re: joint controller). It was a pleasure discussing this and more. Thank you to Øystein Flagstad for inviting me! #dataprivacy #dataprotection #privacyFOMO

  • View profile for Richard Lawne

    Privacy & AI Lawyer

    2,679 followers

    The EDPB recently published a report on AI Privacy Risks and Mitigations in LLMs. This is one of the most practical and detailed resources I've seen from the EDPB, with extensive guidance for developers and deployers. The report walks through privacy risks associated with LLMs across the AI lifecycle, from data collection and training to deployment and retirement, and offers practical tips for identifying, measuring, and mitigating risks. Here's a quick summary of some of the key mitigations mentioned in the report:
    For providers:
    • Fine-tune LLMs on curated, high-quality datasets and limit the scope of model outputs to relevant and up-to-date information.
    • Use robust anonymisation techniques and automated tools to detect and remove personal data from training data.
    • Apply input filters and user warnings during deployment to discourage users from entering personal data, as well as automated detection methods to flag or anonymise sensitive input data before it is processed.
    • Clearly inform users about how their data will be processed through privacy policies, instructions, warnings, or disclaimers in the user interface.
    • Encrypt user inputs and outputs during transmission and storage to protect data from unauthorised access.
    • Protect against prompt injection and jailbreaking by validating inputs, monitoring LLMs for abnormal input behaviour, and limiting the amount of text a user can input.
    • Apply content filtering and human review processes to flag sensitive or inappropriate outputs.
    • Limit data logging and provide deployers with configurable options for log retention.
    • Offer easy-to-use opt-in/opt-out options for users whose feedback data might be used for retraining.
    For deployers:
    • Enforce strong authentication to restrict access to the input interface and protect session data.
    • Mitigate adversarial attacks by adding a layer for input sanitisation and filtering, and by monitoring and logging user queries to detect unusual patterns.
    • Work with providers to ensure they do not retain or misuse sensitive input data.
    • Guide users to avoid sharing unnecessary personal data through clear instructions, training, and warnings.
    • Educate employees and end users on proper usage, including the appropriate use of outputs and the phishing techniques that could trick individuals into revealing sensitive information.
    • Ensure employees and end users avoid overreliance on LLMs for critical or high-stakes decisions without verification, and ensure outputs are reviewed by humans before implementation or dissemination.
    • Securely store outputs and restrict access to authorised personnel and systems.
    This is a rare example where the EDPB strikes a good balance between practical safeguards and legal expectations. Link to the report included in the comments. #AIprivacy #LLMs #dataprotection #AIgovernance #EDPB #privacybydesign #GDPR
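
One provider-side mitigation above, flagging or anonymising personal data in inputs before processing, can be approximated with pattern-based redaction. A minimal Python sketch; the regexes are illustrative and will miss many identifier forms, so a real deployment would pair this with a trained PII detector.

```python
import re

# Illustrative patterns only; production systems need NER-based PII detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected identifiers with placeholders before the LLM sees them."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_pii("Contact Jane at jane.doe@example.com or +1 415-555-0100."))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```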

  • View profile for Colin S. Levy

    General Counsel @ Malbek - CLM for Enterprise | Adjunct Professor of Law | Author of The Legal Tech Ecosystem | Legal Tech Educator | Fastcase 50 (2022)

    45,700 followers

    As a veteran SaaS lawyer, I've watched Data Processing Agreements (DPAs) evolve from afterthoughts to deal-breakers. Let's dive into why they're now non-negotiable and what you need to know:
    A) DPA Essentials Often Overlooked:
    - Subprocessor Management: DPAs should detail how and when clients are notified of new subprocessors. This isn't just courteous - it's often legally required.
    - Cross-Border Transfers: Post-Schrems II, mechanisms for lawful data transfers are crucial. Standard Contractual Clauses aren't a silver bullet anymore.
    - Data Minimization: Concrete steps to ensure only necessary data is processed. Vague promises don't cut it.
    - Audit Rights: Specific procedures for controller-initiated audits. Without these, you're flying blind on compliance.
    - Breach Notification: Clear timelines and processes for reporting data breaches. Every minute counts in a crisis.
    B) Why Cookie-Cutter DPAs Fall Short:
    - Industry-Specific Risks: Healthcare DPAs need HIPAA provisions; fintech needs PCI-DSS compliance clauses. One size does not fit all.
    - AI/ML Considerations: Special clauses for automated decision-making and profiling are essential as AI becomes ubiquitous.
    - IoT Challenges: Addressing data collection from connected devices. The 'Internet of Things' is a privacy minefield.
    - Data Portability: Clear processes for returning data in usable formats post-termination. Don't let your data become a hostage.
    - Privacy by Design: Embedding privacy considerations into every aspect of data processing. It's not just good practice - it's the law.
    In 2024, with GDPR fines hitting €1.4 billion, generic DPAs are a liability, not a safeguard. As AI and IoT reshape data landscapes, DPAs must evolve beyond checkbox exercises to become strategic tools. Remember, in the fast-paced tech industry, knowledge of these agreements isn't just useful - it's essential. They're not just legal documents - they're the foundation for innovation and collaboration in our digital age.
    Pro tip: Review your DPAs quarterly. The data world moves fast - your agreements should keep pace. Pay special attention to changes in data protection laws, new technologies you're adopting, and shifts in your data processing activities. Clear, well-structured DPAs prevent disputes and protect all parties' interests.
    What's the trickiest DPA clause you've negotiated? Share your war stories below. #legaltech #innovation #law #business #learning
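
One way to keep the quarterly reviews honest is to track the essentials above as a machine-checkable checklist per agreement. A hypothetical Python sketch; the fields simply mirror the list, as an illustration rather than legal advice.

```python
from dataclasses import dataclass

@dataclass
class DPAReview:
    """Checklist over the DPA essentials above (illustrative fields)."""
    subprocessor_notification: bool  # how/when clients learn of new subprocessors
    cross_border_mechanism: bool     # e.g. SCCs plus transfer safeguards
    data_minimization_steps: bool    # concrete steps, not vague promises
    audit_rights: bool               # controller-initiated audit procedures
    breach_notification_sla: bool    # clear timelines and reporting process

    def gaps(self) -> list[str]:
        """Names of the essentials this DPA still lacks."""
        return [name for name, ok in self.__dict__.items() if not ok]

review = DPAReview(True, True, False, True, False)
print(review.gaps())  # ['data_minimization_steps', 'breach_notification_sla']
```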

  • View profile for Montgomery Singman

    Managing Partner @ Radiance Strategic Solutions | xSony, xElectronic Arts, xCapcom, xAtari

    26,763 followers

    On August 1, 2024, the European Union's AI Act came into force, bringing in new regulations that will shape how AI technologies are developed and used within the E.U., with far-reaching implications for U.S. businesses. The AI Act represents a significant shift in how artificial intelligence is regulated within the European Union, setting standards to ensure that AI systems are ethical, transparent, and aligned with fundamental rights. For U.S. companies that operate in the E.U. or work with E.U. partners, this new regulatory landscape demands careful attention. Compliance is not just about avoiding penalties; it's an opportunity to strengthen your business by building trust and demonstrating a commitment to ethical AI practices. This guide provides a detailed look at the key steps to navigate the AI Act and how your business can turn compliance into a competitive advantage.
    🔍 Comprehensive AI Audit: Begin with a thorough audit of your AI systems to identify those under the AI Act's jurisdiction. This involves documenting how each AI application functions, mapping its data flows, and ensuring you understand the regulatory requirements that apply.
    🛡️ Understanding Risk Levels: The AI Act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. Your business needs to accurately classify each AI application to determine the necessary compliance measures; those deemed high-risk require particularly stringent controls.
    📋 Implementing Robust Compliance Measures: For high-risk AI applications, detailed compliance protocols are crucial. These include regular testing for fairness and accuracy, ensuring transparency in AI-driven decisions, and providing clear information to users about how their data is used.
    👥 Establishing a Dedicated Compliance Team: Create a specialized team to manage AI compliance efforts. This team should regularly review AI systems, update protocols in line with evolving regulations, and ensure that all staff are trained on the AI Act's requirements.
    🌍 Leveraging Compliance as a Competitive Advantage: Compliance with the AI Act can enhance your business's reputation by building trust with customers and partners. By prioritizing transparency, security, and ethical AI practices, your company can stand out as a leader in responsible AI use, fostering stronger relationships and driving long-term success.
    #AI #AIACT #Compliance #EthicalAI #EURegulations #AIRegulation #TechCompliance #ArtificialIntelligence #BusinessStrategy #Innovation
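
The audit and classification steps lend themselves to a simple AI-system register. A hedged Python sketch: the four risk tiers come from the Act itself, while the record fields and example systems are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk levels named in the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: AIActRiskTier
    data_flows_documented: bool = False  # audit artifact, illustrative field

    def requires_strict_controls(self) -> bool:
        # High-risk systems carry the stringent obligations; unacceptable-risk
        # systems are prohibited outright.
        return self.risk_tier in (AIActRiskTier.HIGH, AIActRiskTier.UNACCEPTABLE)

register = [
    AISystemRecord("resume screener", "hiring decisions", AIActRiskTier.HIGH),
    AISystemRecord("spam filter", "email triage", AIActRiskTier.MINIMAL),
]
print([s.name for s in register if s.requires_strict_controls()])  # ['resume screener']
```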

  • View profile for Mani Keerthi N

    Cybersecurity Strategist & Advisor || LinkedIn Learning Instructor

    17,355 followers

    On Protecting the Data Privacy of Large Language Models (LLMs): A Survey.
    From the research paper: In this paper, we extensively investigate data privacy concerns within LLMs, specifically examining potential privacy threats from two angles, privacy leakage and privacy attacks, and the pivotal technologies for privacy protection during the various stages of the LLM lifecycle, including federated learning, differential privacy, knowledge unlearning, and hardware-assisted privacy protection.
    Some key aspects from the paper:
    1) Challenges: Given the intricate complexity involved in training LLMs, privacy protection research tends to dissect the various phases of LLM development and deployment, including pre-training, prompt tuning, and inference.
    2) Future Directions: Protecting the privacy of LLMs throughout their creation process is paramount and requires a multifaceted approach.
    (i) Firstly, during data collection, minimizing the collection of sensitive information and obtaining informed consent from users are critical steps. Data should be anonymized or pseudonymized to mitigate re-identification risks.
    (ii) Secondly, in data preprocessing and model training, techniques such as federated learning, secure multiparty computation, and differential privacy can be employed to train LLMs on decentralized data sources while preserving individual privacy.
    (iii) Additionally, conducting privacy impact assessments and adversarial testing during model evaluation ensures potential privacy risks are identified and addressed before deployment.
    (iv) In the deployment phase, privacy-preserving APIs and access controls can limit access to LLMs, while transparency and accountability measures foster trust with users by providing insight into data handling practices.
    (v) Ongoing monitoring and maintenance, including continuous monitoring for privacy breaches and regular privacy audits, are essential to ensure compliance with privacy regulations and the effectiveness of privacy safeguards.
    By implementing these measures comprehensively throughout the LLM creation process, developers can mitigate privacy risks and build trust with users, thereby leveraging the capabilities of LLMs while safeguarding individual privacy.
    #privacy #llm #llmprivacy #mitigationstrategies #riskmanagement #artificialintelligence #ai #languagelearningmodels #security #risks
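
Of the technologies the survey names, differential privacy is the most compact to illustrate: calibrated noise bounds how much any single person's record can shift a released statistic. A minimal Python sketch of the Laplace mechanism; the epsilon value and example query are illustrative.

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale), sampled as the difference of two exponential draws."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(true_count: int, epsilon: float) -> float:
    """Differentially private count: a counting query has sensitivity 1,
    so the noise scale is 1/epsilon. Smaller epsilon = stronger privacy."""
    return true_count + laplace_noise(1 / epsilon)

# Hypothetical query: how many users submitted health-related prompts.
print(dp_count(128, epsilon=0.5))  # e.g. 126.3; the noise varies per run
```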

  • View profile for Katharina Koerner

    AI Governance & Security I Trace3 : All Possibilities Live in Technology: Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,371 followers

    This article by Sona Sulakian, CEO of Pincites, is a great resource for drafting AI-specific contract clauses. It discusses the importance of including a comprehensive AI addendum in vendor contracts to manage the unique challenges posed by AI integration, and it suggests specific contractual clauses to balance responsibilities and protect the interests of both customers and vendors. Link to article: https://lnkd.in/g-qHdmfM
    The article covers clauses that address issues such as data ownership, usage rights, model training restrictions, compliance with laws, ethical AI usage, and liability for AI outputs.
    * * *
    Examples of contract clauses:
    --> Require Prior Consent for AI Features: This ensures that vendors cannot implement or offer AI features without the customer's explicit consent, maintaining the customer's control over AI deployments.
    --> Define Data Ownership and Usage Rights: The clauses specify that all data provided by the customer, and outputs generated by AI, remain the customer's property, protecting their data rights and limiting the vendor's use of this data.
    --> Prohibit Model Training with Customer Data: This protects sensitive customer data from being used to enhance the vendor's AI models unless explicitly permitted, safeguarding proprietary information.
    --> Mandate Compliance with Applicable Laws: Vendors must comply with relevant data protection laws and industry standards, ensuring AI features are legally compliant and ethically managed.
    --> Ensure Responsible and Ethical AI Use: Vendors are required to demonstrate transparent and unbiased AI use, aligning their operations with ethical standards to mitigate risks such as unfair decision-making.
    --> Set Limitations of Liability for AI Outputs: Vendors are held accountable for any errors or damages arising from AI outputs, emphasizing the need for accurate and reliable AI systems.
    * * *
    Thank you to the author! Infographics by Pincites; see the LinkedIn post by Sona Sulakian: https://lnkd.in/geudP7yU

  • View profile for Debbie Reynolds

    The Data Diva | Global Data Advisor | Retain Value. Reduce Risk. Increase Revenue. Powered by Cutting-Edge Data Strategy

    39,918 followers

    🧠 "Data systems are designed to remember data, not to forget data." – Debbie Reynolds, The Data Diva
    🚨 I just published a new essay in the Data Privacy Advantage newsletter called: 🧬An AI Data Privacy Cautionary Tale: Court-Ordered Data Retention Meets Privacy🧬
    🧠 This essay explores the recent court order from the United States District Court for the Southern District of New York in the New York Times v. OpenAI case. The court ordered OpenAI to preserve all user interactions, including chat logs, prompts, API traffic, and generated outputs, with no deletion allowed, not even at the user's request.
    💥 That means:
    💥 "Delete" no longer means delete
    💥 API business users are not exempt
    💥 Personal, confidential, or proprietary data entered into ChatGPT could now be locked in indefinitely
    💥 Even if you never knew your data would be involved in litigation, it may now be preserved beyond your control
    🏛️ This order overrides global privacy laws, such as the GDPR and CCPA, highlighting how litigation can erode deletion rights and intensify the risks associated with using generative AI tools.
    🔍 In the essay, I cover:
    ✅ What the court order says and why it matters
    ✅ Why enterprise API users are directly affected
    ✅ How AI models retain data behind the scenes
    ✅ The conflict between privacy laws and legal hold obligations
    ✅ What businesses should do now to avoid exposure
    💡 My recommendations include:
    • Train employees on what not to submit to AI
    • Curate all data inputs with legal oversight
    • Review vendor contracts for retention language
    • Establish internal policies for AI usage and audits
    • Require transparency from AI providers
    🏢 If your organization is using generative AI, even in limited ways, now is the time to assess your data discipline. AI inputs are no longer just temporary interactions; they are potentially discoverable records. And now, courts are treating them that way.
    📖 Read the full essay to understand why AI data privacy cannot be an afterthought.
    #Privacy #Cybersecurity #DataDiva #DataPrivacy #AI #LegalRisk #LitigationHold #PrivacyByDesign #TheDataDiva #OpenAI #ChatGPT #Governance #Compliance #NYTvOpenAI #GenerativeAI #DataGovernance #PrivacyMatters

  • View profile for Victoria Beckman

    Associate General Counsel - Cybersecurity & Privacy

    31,581 followers

    The Future of Privacy Forum and OneTrust have published an updated guide to help organizations navigate Conformity Assessments (CAs) under the final version of the EU #ArtificialIntelligence Act. CAs are a cornerstone of the EU AI Act's compliance framework and will be critical for any organization developing or deploying high-risk #AIsystems in the EU. The guide offers a clear and practical framework for assessing whether, when, and how a CA must be conducted. It also clarifies the role of CAs as an overarching accountability mechanism within the #AIAct. This guide:
    - Provides a step-by-step roadmap for conducting a Conformity Assessment under the EU AI Act.
    - Presents CAs as essential tools for ensuring both product safety and regulatory compliance.
    - Identifies the key questions organizations must ask to determine if they are subject to CA obligations.
    - Explains the procedural differences between internal and third-party assessments, including timing and responsibility.
    - Details the specific compliance requirements for high-risk #AI systems.
    - Highlights the role of documentation and how related obligations intersect with the CA process.
    - Discusses the use of harmonized standards and how they can create a presumption of conformity under the Act.
    This guide serves as a practical resource for understanding the conformity assessment process and supporting organizations in preparing for compliance with the EU AI Act.

  • View profile for AD E.

    GRC Visionary | Cybersecurity & Data Privacy | AI Governance | Pioneering AI-Driven Risk Management and Compliance Excellence

    10,190 followers

    You're the new Privacy Analyst at a U.S. retail company. Your manager just asked you to ensure the company is compliant with the California Consumer Privacy Act (CCPA), but you quickly realize there's no data inventory or record of what personal data is being collected, where it's stored, or who it's shared with. How would you even begin?
    First, you'd build a data inventory - that means identifying what personal data the company collects (names, emails, browsing history, etc.), how it's collected (forms, cookies, third-party platforms), and where it lives (CRM, marketing tools, cloud storage, etc.). You'd likely send out a questionnaire or meet with key teams (marketing, IT, sales) to gather this info.
    Then, you'd map the data flows - what systems touch this data, who has access, and whether it gets sent to vendors or service providers. This is essential for understanding risk and creating compliant privacy notices.
    Finally, you'd document it all and check it against the CCPA requirements - can users request access to their data? Can they delete it? Is there a way to opt out of data selling? This is GRC work in action: breaking down compliance into trackable steps and helping the business stay accountable.
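
The final check, whether users can access, delete, or opt out, maps naturally onto a request-routing layer that fans out to every system recorded in the data inventory. A hypothetical Python sketch; the handler behavior is stubbed for illustration.

```python
from enum import Enum

class CCPARequest(Enum):
    ACCESS = "access"    # right to know
    DELETE = "delete"    # right to deletion
    OPT_OUT = "opt_out"  # right to opt out of sale/sharing

def handle_consumer_request(kind: CCPARequest, consumer_id: str) -> str:
    # Hypothetical handlers; each would fan out to every system listed in the
    # data inventory as holding this consumer's personal data.
    if kind is CCPARequest.ACCESS:
        return f"compiled data report for {consumer_id}"
    if kind is CCPARequest.DELETE:
        return f"deletion propagated to all systems holding {consumer_id}"
    return f"sale/share flag disabled for {consumer_id}"

print(handle_consumer_request(CCPARequest.OPT_OUT, "user-42"))
```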
